It’s ubiquitous. Students plunder it for their essays rather than thinking for themselves. Critics charge that it’s riddled with errors.
I’m talking, of course, about Wikipedia.
Except, contrary to early concerns, Wikipedia is extremely reliable and enriches rather than impoverishes our thinking. The same is true of calculators, novels, printing presses, and even the written word itself—despite each having sparked its own moral panic. Your objections to ChatGPT and other LLMs? Refuted.
Wait, come to think of it, other campaigns against new information technologies have aged better. Look at what smartphones, social media, classroom Chromebooks, short-form videos, and recommendation algorithms have done to our intellects. Maybe LLMs will be as harmful or worse.
Maybe. Regardless, the analogy with Wikipedia is instructive. We habitually fret about new information technologies that turn out to be not just beneficial but transformative. That we might eventually regard ChatGPT the way we now regard Wikipedia is hardly a defense, but it is a warning: criticisms may look more compelling than they really are because AI is new—and we are old.
Another lesson is for academics specifically. Educators were initially pessimistic about Wikipedia because they imagined only how lazy students might misuse it. But the website proved useful in countless ways that have nothing to do with schoolwork—satisfying everyday curiosity, learning about other countries, exploring pop culture, and much more. Even the impact on education ultimately turned out positive. Wikipedia is a superb tool for students to acquire the basics of many subjects (with certain exceptions, like philosophy).
Likewise, academics are skeptical about ChatGPT because they fixate on it as a weapon of mass plagiarism. Instructors are scrambling to figure out if they can reliably detect AI-generated term papers (they can’t) or if they have to redesign their courses (they do). Their jobs are becoming harder as higher ed destabilizes and tech companies once again rake in billions by manipulating the public and disrupting the institutions we depend on.
Yet LLMs have countless other uses. Wanting to ban LLMs because students use them to cheat is like wanting to outlaw cars because criminals use them as getaway vehicles. LLMs can dull your mind, yes, but they can also sharpen it.
The remarkable thing about LLMs is that they’re so general purpose that they can replace virtually all reasoning. That’s also what makes them so treacherous.
Outsourcing specialized cognitive tasks—memory to your notepad, arithmetic to your calculator—frees up your mind for more worthwhile activities and can even scaffold your thinking, allowing you to ascend to new intellectual heights. Outsourcing reasoning, by contrast, means forgoing precisely what’s most valuable and can prevent you from ever leaving the ground. “What we stand to lose is not just a skill but a mode of being: the pleasure of invention, the felt life of a mind at work,” writes poet Meghan O’Rourke in The New York Times.
Indeed, many college students seem to be sacrificing their intellect. In The New Yorker, Hua Hsu describes students brazenly cheating with AI, sometimes even fooling themselves about what they’re doing. One says, “Any type of writing in life, I use AI.” When assigned the work of a 19th-century abolitionist—“obviously, I ain’t trying to read that”—the student prompts AI for a summary. But even the summary is too long, so he asks for bullet points.
Writing is thinking, according to an old saw as well as a recent Nature editorial, yet students are using LLMs to bypass not just writing but also reading. That’s hardly unprecedented—SparkNotes has been around for decades—but what is new is being able to condense the life out of literally any text. AI may thus contribute to an ongoing and alarming decline in the quality of literacy.
Some professors estimate that over half of students are outsourcing their work to LLMs. Seems like a short-sighted way to spend hundreds of thousands of dollars, compounding financial debt with “cognitive debt.” But as I’ve suggested elsewhere, “think about it from their perspective: with abundant social opportunities, big professional ambitions, and more of their competitors using ChatGPT, why be a sucker?” Many students pursue formal education not to learn but to signal and network. The students aren’t foolish; their incentives are bad.
AI skeptics have bad incentives too. They so badly want to believe the worst about LLMs that they lap up any criticism. Last month, MIT researchers released a preprint titled “Your Brain on ChatGPT,” likening LLM use to consumption of hard drugs. Would you believe that participants who outsourced their essay to an LLM were less able to quote from it, less likely to feel ownership over the essay, showed lower activation in areas of the brain associated with cognitive effort, and performed worse on a subsequent essay assignment? Shocking, I know.
As Ethan Mollick explains, the problem isn’t that the study’s conclusions are false but that it overgeneralizes. AI does not inevitably disrupt learning. By default, LLMs are so helpful that they will eagerly do all the work without being asked. But with effective prompting, studies suggest that they can enhance learning outcomes. For instance, LLMs can be instructed to act as tutors that help students think through problems on their own. In countries where educational resources are scarce, AI-assisted education could be invaluable.
Studies like these show that LLMs have more potential than skeptics assume. Yet they don’t tell us much about the future—about how LLMs will be most commonly used and whether they will improve our cognitive abilities (like the written word) or degrade them (like smartphones). Given that people tend to prefer learning strategies that are easier but less effective than alternatives, the pessimists may well turn out to be right: LLMs will lay waste to higher education and induce cognitive atrophy for most people.
But you’re not most people. Whether the campaign du jour is prudent concern or moral panic, you can use LLMs to enhance your cognition, if you use them wisely.
Academics tend to think of LLMs as essay-generators with perhaps one or two other functions. When you’re a nail, every hammer looks like it’s trying to pummel you.
An LLM is actually like a Swiss Army knife. It can help you brainstorm, generate examples, and copy-edit. It’s also great at drafting generic emails. But when it matters, the last thing you want to use an LLM for is to produce copy for others to consume—the copy is bad, and outsourcing writing is bad for you. Its real value lies, rather, in generating inputs to your own thoughts.
Allow me to illustrate with a pair of extremely outdated cultural references that will finally end the charade and out myself as not an actual member of Gen-Z. (Sorry, besties.)
Ever heard of a player piano? They were common early in the 20th century. (Pawpaw had to burn ours for fuel when the Great Depression hit.) Attach a paper roll with holes corresponding to notes, and the player piano would perform any song you liked. It offered mechanical imitation of musical artistry. That’s roughly how many people still think about LLMs—machine-generated imitation of genuine thought.
Now consider a choose-your-own-adventure storybook. They’re still around, but their popularity peaked in the 1990s. (Back then, I was the pawpaw reading them to my grandchildren.) Instead of reading straight from beginning to end, you’re given a series of choices along the way. If you want to search the abandoned house for more clues, turn to page 26. If you want to run home and call the police, turn to page 37. One book, many stories.
An LLM is like a choose-your-own-adventure research book—one with infinite possibilities. It enables you to chart a unique path through a vast body of knowledge. You decide what you want to learn and then ask a series of follow-up questions as each answer sparks further curiosity.
As with other cognitive-enhancing information technologies, the key is iterative engagement rather than one-shot prompting. Each exchange should deepen your understanding, leading you to synthesize information and refine your inquiry. In the process, you learn not just to seek answers to pre-existing questions but to discover worthwhile new questions that AI is best poised to answer.
But won’t LLMs just hallucinate? Again, this is one of those criticisms that’s popular precisely because it fuels pre-existing hostility. As I’ve written elsewhere:
“ChatGPT is indeed a spotty source for one-off particulars, like a specific article or book. But that’s not what it’s for…. An LLM gives you purchase not on isolated facts but on systems of knowledge distilled from its vast training material…. You can use this knowledge to interface with a world of facts. You don’t take ChatGPT’s outputs on trust. You use them—discovering their accuracy by seeing how they help you solve problems or elicit further knowledge from expert sources.”
To be clear, yes, LLMs can hallucinate, but you can dodge this pitfall by developing an intuitive sense of when they’re reliable and by double-checking whenever there’s doubt.
In a previous essay, I focused on using LLMs for everyday life management—to ask your doctor better questions, overcome software glitches, get help tackling DIY projects, and so on.
For intellectual work, you need other ways to confirm what an LLM tells you. If you’re seeking an overview of a topic you don’t know much about, you can ask for sources and read the articles it provides. Things are different when you’re an expert yourself: you become the judge. An LLM takes you on an expedition through familiar terrain, and you decide what’s accurate and important, and what’s not.
Try using ChatGPT’s Deep Research to generate a survey of a topic with an annotated bibliography so that you have a roadmap for studying primary sources. Or upload a PDF (or ten) and ask questions about the contents so you can locate key arguments and see where objections are handled (ask for page numbers). Or convey your level of understanding and background, ask what a textbook on a given subject would look like, complete with a description of each chapter’s contents, and then request expansions as needed.

You can also treat an LLM as an interlocutor. Share an argument and ask for constructive criticism, perhaps from a particular perspective—a feminist, an economist, or even a particular public figure whose writings, and whose critics’ writings, are well represented in the pretraining data. Use voice mode to become absorbed.

Your mileage may vary. The more esoteric your subject, the less it’s represented in the pretraining data. Ultimately, to discover how LLMs can enhance your cognition—given your interests—you should spend a few hours experimenting and begin charting your own path.
What next? If you’d like to make your college course ChatGPT-proof without eliminating take-home essays, turn to page 44. If you’d like to know how useful LLMs are for everyday life management, turn to page 55.
I’ve taken on the Sisyphean task of trying to convince people not to use “ChatGPT” as the generic term for all the LLM assistants. Claude and Gemini will do just as well (sometimes a bit better, sometimes a bit worse, but with a different personality in any case). It might even be worth occasionally using Grok or some of the open-source ones like Mistral, LLaMa, and DeepSeek. But the big point is that OpenAI is not especially trustworthy, and we shouldn’t be turning the whole field over to them.
You say that AI doesn't necessarily disrupt learning. But I think it does -- indirectly -- by undermining its value. The problem with AI isn't that it's a bad teacher (quite the contrary) but that it makes learning futile and unnecessary. It made sense to want to know as much as your teachers did back when the expectation was that they'd retire and you'd replace them. But it's an entirely different ballgame when your teacher is the one that's planning to replace you.