This semester, I’m experimenting with a new grading scheme in my college classes: students must choose between a writing path (two in-class exams plus a semester-long, scaffolded paper with rigorous requirements) and an exam-only path. The writing path is required to be eligible for an A. The maximum grade on the exam-only path is an A-, no matter how well you do on the exams.
I’m doing this primarily because I received so many sketchy, probably AI-written papers last semester. I don’t mind students using AI to assist them, just not to replace the writing process. I want to continue teaching writing to students who take it seriously and will benefit from it, while hopefully deterring students who don’t want to do the work themselves, because I really don’t want to read terrible papers citing academic journal articles I know they haven’t actually read.
In designing this grading scheme, I found ChatGPT quite helpful for understanding which features of the scheme might be considered unfair, and how I could frame it in a way that made sense and both is and appears fair. I just straight up asked: why might this be considered unfair? What sorts of pushback might I receive? What are some different ways I might go about constructing this new grading scheme? Chat had some really well-thought-out ideas.
ChatGPT is like an intellectual, creative partner for me. I have all kinds of deep, back-and-forth conversations with it about things I’m thinking about, projects I’m working on, or topics I want to understand better. This may be controversial, but to some extent, I think it’s able to have original thoughts. Maybe not completely original, but it’s not just repeating verbatim information it’s been fed, or copying something straight from Wikipedia. It’s synthesizing information, analyzing and comparing ideas, and putting forth its own ways of explaining complex concepts.