Save the Student Essay
Without getting played
How do you learn philosophy? By doing it, of course. You read great texts and understand opposing philosophical views. Then you try to form a view yourself—initially through reflection and dialogue but eventually, and more seriously, by thinking and rethinking on the page.
To do philosophy the right way, I recommend the “slow cook” method: you let ideas stew, unattended, bubbling up to the surface once they’re ready.
Yet at this point it’s educational malpractice for professors to blithely assign slow-cooked (take-home) essays. You’re playing your students. You’re playing yourself.
If you think large language models can’t produce passable essays in most courses, you’re misinformed about the technology. And if you think AI-generated essays are easy to spot, you’re falling for the toupee fallacy: noticing only the obvious cases, letting them stand in for all the rest.
In small seminars with advanced majors, I trust my students enough to continue assigning take-home essays—balancing the risk by putting more weight on participation and oral exams. Some colleagues assign in-class presentations.
But none of this scales to large, lower-level courses. A dozen oral exams is a pleasure. A hundred is a nightmare.
Replace essays with in-class exams, then? Maybe that’s best, and maybe it’s where we’ll all land in the end, but I don’t think we should settle too quickly:
Sprinting through an essay in a single hour is no test of the argumentative ability we so prize. Students need time to exhaust their thinking, walk away, let new ideas arrive unbidden, in a flash, return to the page, and refine their thoughts. This experience is too valuable to deny students in our lower-level courses. Plus, we want to recruit majors who excel at the activity essential to our discipline.
Some professors are trying to save the student essay with shared Google Docs, sleuthing through the edit history for suspiciously large deposits. But savvy students can just type out AI-generated essays by hand (call that artisanal plagiarism). There are even browser extensions that make pasted text look typed. Besides, it’s just miserable to build distrust into your day-to-day interactions with students.
What about John Robison’s multi-day essay exam that mimics the essay-writing process? Specialized software gives students access to only the articles and their own notes. They begin writing their essay on Day 1, then rethink, revise, and complete on Days 2 and 3. Nice idea, though it limits the opportunity for ideas to stew—while allowing students to consult an LLM between classes.
Is essay-writing cooked?
Here’s my solution: students do low-stakes take-home essays as preparation for in-class essay exams. That way, they get a taste of real philosophy while practicing for the exams that make up the bulk of their grade.
There are many reasonable ways to implement this approach, but let me sell you on mine. I want my students consistently engaged, and reading alone isn’t enough: you never understand ideas as well as when you have to write about them. So I had them submit short essays on each week’s material every Friday afternoon, and then, starting mid-semester, take three in-class exams where they wrote the same kind of essay.
How did it work out?
Last semester, exams and in-class participation made up 80% of the grade; I graded the essays for completion only. This eliminated the incentive to cheat…right? Since by not practicing for the exams they’d only be cheating themselves? Lol.
As it happens, a highly touted AI detector (Pangram) was released just as the semester started—a natural test of students’ honesty. For the final three rounds of short essays, 13 of my 77 students tripped the detector. A couple were probably false positives, but I bet there were false negatives too. In the age of AI, 83% of students doing the work themselves is almost cause for celebration.
Then came the confrontations.
Half the students denied cheating—unconvincingly. But even when they couldn’t explain the ideas in their essay, that wasn’t enough evidence to penalize them or initiate university disciplinary procedures. The cost of false conviction was too high. The lesson I unwittingly imparted: cheat and lie, and you’ll get away with it.
I ended up going easy on the students who did confess, not wanting to treat the truth-tellers much more harshly than the fibbers. One honest student, overcome by a panic attack, wept and asked me not to go easy on her—to inflict a more serious penalty.
So I’m adjusting.
First, I’ll shift to an honor system, not bothering to screen for AI-generated essays. The detectors aren’t sufficiently reliable, but even if they were, it’s not worth it. We need to preserve trust with our students. Sure, play cop sometimes, but as little as possible; don’t make it central to your role.
Second, I’ll reduce the essays from 20% of the final grade to 10%. Some students will still outsource and miss out on exam practice—but now for even less short-term benefit. The value to the rest of the class of writing the essays themselves—especially for those intrinsically motivated to really think through ideas—is vastly greater.
Third, I’ll draw an even clearer link between the essays and the exams. I’ll give the students the same structured prompts for each and keep the exams open-book—they’ll get physical copies of two assigned articles and have to pick one. Students will have extra reason to put in effort on a take-home essay if the article they write about might also be an option on the exam.
Fourth, I’ll follow the lead of David Friedell and let students do a longer essay instead of the final exam, if they choose. The catch is that they’ll have to meet with me twice; this will select for the (few) enthusiasts and screen out the (many) casuals. These meetings will also minimize the likelihood of outsourcing.
Is this the best of all possible plans? No, but it’s better than getting played, better than killing the essay. And can it be refined? Let me cook.

I have reached the bizarre point that when I get a properly crappy essay, I think to myself "hey at least they didn't use AI, or it would actually be better", and then I am inclined to give them a slightly higher grade just to reward them for having done this bad work for themselves.
I TA'd intro political theory at my university three times for three different professors, and the in-class essays combined with a single final paper worked best: they get to practice the relevant skills in the essays, and then test them out in the final paper. It's not ideal, but I've learned that trusting students to actually care about learning makes them more likely to... actually care about learning. There are 360 students in that class, and it's impossible to punish the AI slop. But the students who use AI are already punishing themselves enough. At least that's how I see it.