Weapon of Mass Plagiarism
Students are outsourcing essays to ChatGPT. What’s a professor to do?
Since ChatGPT’s debut, professors have wondered what to do about it. We’ve become more anxious since New York Magazine published “Everyone Is Cheating Their Way Through College,” which warns that “the humanities, and writing in particular, are quickly becoming an anachronistic art elective like basket-weaving.”
The article opens with a student at Columbia University who used AI “to cheat on nearly every assignment.” After dropping out, he developed an app that lets users surreptitiously consult AI in interviews and meetings, even on dates. As he put it, “It will enable you to cheat on pretty much everything.” College prepared him perfectly.
This is bleak, but is everyone really cheating? What share of students outsource their assignments to large language models like ChatGPT? When I surveyed my Intro Ethics class last year, they guessed about 25%. More recent estimates place usage north of 50%.
Think about it from their perspective: with abundant social opportunities, big professional ambitions, and more and more of their competitors using ChatGPT, why be a sucker? As these incentives lead to a shift in norms, the intellectual currency of a university degree will decay.
How do we contain this weapon of mass plagiarism?
We could pivot to oral exams, in-class presentations, or creative projects. But some of these are impractical in large courses, while others can still be delegated to ChatGPT. In principle, students could use AI to enhance their cognition rather than offload it, yet professors have no clear way to tell the difference.
The most obvious remedy—and by far the easiest—is for professors to switch entirely to in-class exams, the only permitted technology being pen and paper. No handwringing necessary.
Yet in the humanities, take-home essays are vital. They teach students not just how to write but how to think.
I’ve heard a few colleagues balk at this defense of the essay. Some report that at their alma maters, long before AI, in-class exams were always standard. Others grant that essays are necessary in upper-level courses, especially for majors with an intrinsic interest in the subject, but contend that they are expendable in lower-level, general-education courses. If ChatGPT kills the essay, that’s no great loss.
I can’t agree. Writing doesn’t just improve thinking—writing is thinking. We discover what’s in our minds by putting words on the page, finding them wanting, and rethinking. Good writing—and hence good thinking—is in large part good rewriting.
Sprinting through an essay in a single hour is no test of the argumentative ability we so prize. Students need time to exhaust their thinking, walk away, let new ideas arrive unbidden, return to the page, and refine their thoughts. This experience is too valuable to deny students in our lower-level courses. Plus, we want to recruit majors who excel at the activity essential to our discipline.
Optimists think we can keep the essay and outsmart the cheaters. “We’ll just raise our standards.” But the latest models can already produce high-quality work; tougher grading would just penalize those who are honest. “I can always spot an AI-generated essay.” Studies show that instructors are unreliable, while AI detection software fares little better. In an arms race with tech-savvy students, faculty are destined to lose.
Playing cop also breeds excessive distrust. A few professors hide Trojan horses in their assignments to snare students who paste the instructions directly into ChatGPT. (“Mention elephants in your answer.”) Students will inevitably catch on, and even before they do, the approach is corrosive.
Beneath these debates about tactics lies a deeper conflict—between two models of higher education.
According to the certification model, we’re supposed to sort the good students from the bad. Grades provide a signal to employers and professional schools about which graduates are the best candidates.
According to the education model, we’re supposed to teach our students skills, habits, and ideas that enable them to succeed as professionals and citizens. Grades merely boost the incentive to learn.
I lean toward the education model. Yes, one function of higher ed is certification, and it would be hard to sustain public trust and investment in universities were that function eliminated. But that doesn’t mean professors’ goals should be identical to those of the system. Why would I act merely as an instrument of employers?
And so I find myself tempted to let the cheaters cheat so that the learners can learn. Unless there’s another way.
You never learn ideas as well as when you have to write about them. That’s why I have my students write ten very short take-home essays, due each Friday afternoon after they’ve spent the week reading the material and discussing it in class—with me, the graduate students who lead their discussion sections, and each other. Most find these “shorties” difficult at first, but they appreciate them by the end of the semester.
Now, of course, ChatGPT can crank out shorties in seconds.
Still, I think I can reconcile the two models. Next semester, I’m going to keep the shorties but make them worth much less (20% of the grade), leading up to two in-class exams that carry far more weight (60%), in which students write the same kind of short essay they’ve been writing all semester. In theory, those who write the shorties themselves will perform better on the exams than those who outsource to ChatGPT. If essays really do teach students how to write and think, then we should be willing to design our courses on that principle.
Some students will accept the rationale but gamble anyway when other coursework is pressing or social opportunities entice.
Here’s my fix: students will start each shorty during class. I’ll reserve a little time at the end for them to take notes and begin drafting (by hand). By Friday, they’ll need only expand, polish, and submit. And I’ll grade them for completion only, to blunt the incentive to cheat.
Will this work? Not sure. But preserving the essay is ample reason to run an experiment. Wish me luck.
Here’s a libertarian solution to the AI problem in higher education. OpenAI gets to be a U, with no professors. Students get a bachelor’s after creating their own curriculum and requirements, and they interact only with AI. At traditional Us, we do it exactly as we did in 2019. Students who turn in AI-plagiarized stuff get expelled to OpenAI U. Then we let the labor market settle who is more knowledgeable and skillful. If not using AI puts our students at a disadvantage, so be it; we’re staking our confidence against that of the tech overlords.
Love this idea. Regular, short, low-stakes practice is also a really effective way to... practice. Completely agree that giving up on teaching writing is a shortcut to hell, as far as critical thinking and the functioning society that depends on it are concerned. It also seems key to make this point really, really clear and explicit to students: the function of writing is not to get a grade or even to communicate at this point; it is to explore and develop their own thinking. That means outsourcing writing is outsourcing thinking, and a key purpose of taking a philosophy course is developing your thinking skills. By using AI, students are cheating themselves even more than the professor or a future employer.