A few weeks ago, my father suffered an acute medical crisis. He was rushed to the hospital, but doctors could offer no immediate diagnosis, leaving us frozen in fear and uncertainty. After the initial shock subsided, the first thing I did was to book the next flight home. The second was to open ChatGPT.
I fed it everything I knew—his medical history, medications, test results. Within seconds, ChatGPT spit out likely causes and outcomes, its leading hypothesis later confirmed by doctors. This information steadied me during an anxious hour, but ChatGPT proved even more valuable in the days that followed. By flagging possible treatments and complications, it helped me advocate for my dad at the hospital and support his recovery at home.
I shared this experience with friends, and one, a physician, cautioned that while ChatGPT’s medical advice is often correct, it can be wrong, sometimes dangerously wrong. I disagreed with her assessment, I told her, from one doctor to another. (I have a PhD in philosophy.)
Large language models make mistakes, yes, but so do physicians. Frequently. Diagnostic errors kill or permanently disable an estimated 800,000 Americans each year.
So what’s the head-to-head?
A late-2024 study graded diagnoses of patient case histories: physicians scored 74%, ChatGPT 90%. An April 2025 study in Nature, using an LLM fine-tuned for medicine, found an even wider gap.
Don’t read too much into these preliminary studies. My friend is right that you shouldn’t entrust major medical decisions to ChatGPT (not yet). But that’s not what I was using it for. I was trying to understand what my father was going through, what to expect, and most important, what questions to ask his healthcare providers. The alternative to chatting with GPT wasn’t chatting with an MD; it was being lost.
Sometimes when I recommend ChatGPT, people react as though I’ve invited them to strike their own skull with a hammer.
Skeptics point to users becoming dumber, their cognitive muscles atrophying as they outsource thinking to generative AI. It doesn’t help that LLMs can be sycophants, stroking users’ egos and telling them exactly what they want to hear, a tendency that predated but was amplified by a now-retracted ChatGPT update and may be driving a disturbing rise in chatbot-fueled psychotic delusions. More broadly, AI is polluting our feeds with slop—crowding out original creative work and strangling the very culture responsible for its training data.
Above all, LLMs hallucinate. Sci-fi author Ted Chiang famously likens ChatGPT to “a blurry JPEG of all the text on the web.” When tasked with restoring the missing resolution, it simply bullshits. Almost every day, you can read about someone blithely passing along fake legal cases, fake scientific studies, or fake book lists.
Worries about hallucination are pervasive, yet they’re almost invariably voiced by people who lack experience with the latest models—and who seem to forget that humans are not perfectly reliable either. ChatGPT is indeed a spotty source for one-off particulars, like a specific article or book. But that’s not what it’s for. As Henry Shevlin writes, this is like “complaining that Google search sucks because it couldn’t tell them where they put their glasses.”
An LLM gives you purchase not on isolated facts but on systems of knowledge distilled from its vast training material. With a little experience, you learn to use that knowledge to interface with the world of facts. You don’t take ChatGPT’s outputs on trust. You use them—discovering their accuracy by seeing how they help you solve problems or elicit further knowledge from expert sources. That’s how ChatGPT helped me during my father’s medical crisis, but medicine is only one system of knowledge it can unlock.
I’m not here to declare ChatGPT an unalloyed good. Its benefits may well be outweighed by the harms above: the cognitive torpor, the fueled psychoses, the slop. I just want to make the case that, used wisely, it’s an invaluable tool. Set aside your job, your art, and (uh) social companionship—I’m just talking everyday life management.
I have 24/7 IT support. Whenever I’m stumped by literally any piece of software, ChatGPT has the answer. Like most laptop monkeys, I check my inbox way too much, for absolutely no good reason. I told ChatGPT, and it wrote a Google Apps Script to batch my email into two daily deliveries, training me to stop reflexively checking when no new emails could have arrived. I’d never even heard of these scripts and had no idea how to run one. Guess how I learned?
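For the curious, here’s the shape of that solution. This is a minimal sketch, not the exact script ChatGPT gave me: it assumes you’ve set up a Gmail filter that archives incoming mail under a label (I’m calling it “Batched” here), which the script then releases on a schedule; the delivery hours are placeholders.

```javascript
// Sketch of the email-batching idea. Assumes a Gmail filter already
// skips the inbox and applies the label "Batched" to new mail.

function deliverBatch() {
  var label = GmailApp.getUserLabelByName('Batched'); // illustrative label name
  if (!label) return; // nothing to deliver if the label doesn't exist
  var threads = label.getThreads();
  for (var i = 0; i < threads.length; i++) {
    threads[i].moveToInbox();            // release the held thread
    label.removeFromThread(threads[i]);  // clear the holding label
  }
}

// One-time setup: run deliverBatch twice a day (hours are placeholders).
function installTriggers() {
  ScriptApp.newTrigger('deliverBatch').timeBased().atHour(8).everyDays(1).create();
  ScriptApp.newTrigger('deliverBatch').timeBased().atHour(16).everyDays(1).create();
}
```

Paste something like this into script.google.com, run installTriggers once, and your inbox stays quiet between deliveries.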
There’s more. The same physician who cautions against using ChatGPT for medical advice strongly recommends using it to draft letters to insurance companies—really, for any complicated customer service request. It’s also great for comparing products. One of my favorite apps is going offline soon (Pocket), so I asked it for a point-by-point comparison of alternatives to choose the one that best suited my needs (Instapaper). After getting divorced and feeling adrift, another friend used it to walk her through the basics: applying for a credit card, buying insurance, filing taxes. I’m no handyman, and if I called myself one, my wife would laugh in my face. Yet with ChatGPT’s guidance, I’ve become handy-ish, fixing our dishwasher and installing a new deadbolt. (My wife now finds other reasons to laugh at me.)
We’re still just skimming the surface. A personal assistant who happens to be a polymath, ChatGPT has been fed enough material to develop expertise in medicine, fitness, finance, travel, nutrition, cooking, shopping, bureaucracy, elementary education, IT, home repair, and more. By tapping into these systems of knowledge, you can solve problems and learn more stuff. Have it translate your symptoms into precise questions before your next specialist appointment. Ask it to research insurance policies and produce side-by-side comparisons. Or just show it your broken appliances.
Can ChatGPT make mistakes? Yes. Especially when conventional wisdom is wrong. But across these domains, it’s still more reliable than going without. Moreover, it’s vastly more productive: forgoing ChatGPT might save you from one error, but you’ll forfeit a hundred truths.
As Ethan Mollick says, compare an LLM not to the best human but to the best available human. Double-check its answers—especially when the stakes are high—and now that ChatGPT has built-in web search, it can even help you find sources.
But it’s not just a search engine. Unlike Google, ChatGPT digests reams of text, quickly synthesizing material scattered across many pages. It answers specific questions, responds to follow-ups, and simplifies as needed. (Also unlike Google, it hasn’t been enshittified by ads and SEO.)
None of this means that it should entirely replace search or any other resource. Just now, my wife told me to check Reddit instead, and she was right. (Andy Masley has more advice about what LLMs are, and aren’t, good for.)
Journalist Ken Klippenstein says he is “consistently amazed by how useless ChatGPT is.” Assessments like this litter magazine essays and social media, but they reveal less about the tool than about the authors: they’re confessing that they don’t know how to ask it good questions.
Sometimes when I recommend ChatGPT, people balk for ethical reasons. I’ll now show that these reasons are utterly misguided.
Kidding. There are obviously reasonable concerns about LLMs and the companies behind them. Nevertheless, some popular criticisms are weaker than they appear.
The environmental objection is based on factual misunderstandings, Andy Masley shows. AI data centers do consume plenty of energy, but only 1-3% is attributable to LLMs; the lion’s share powers things like content-recommendation systems. YouTube, for perspective, uses about one hundred times as much energy as chatbots. Each ChatGPT query burns through the equivalent of boiling a cup of water—the average American uses 10,000 times as much energy per day. At both the individual and collective level, it’s just not enough to get worked up about. If you’re worried about climate change, as Masley says, there are a hundred better places to focus your attention.
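The back-of-envelope arithmetic is easy to check (these are my own ballpark figures, not Masley’s exact numbers). Heating a cup of water from room temperature to boiling takes roughly

$$
E_{\text{cup}} \approx 0.25\,\text{kg} \times 4.19\,\tfrac{\text{kJ}}{\text{kg}\cdot{}^{\circ}\text{C}} \times 80\,{}^{\circ}\text{C} \approx 84\,\text{kJ} \approx 0.023\,\text{kWh},
$$

while U.S. per-capita primary energy consumption runs on the order of 240 kWh per day. The ratio, $240/0.023$, is indeed about 10,000.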
Another common objection is that LLMs steal content from writers by using their work in their training data, but this argument is not at all clear-cut. As Richard Chappell observes, it’s disorienting to hear this complaint from liberals, who ordinarily have no scruples about downloading pirated videos or books. Fundamentally, Chappell argues, the objection assumes that creators have a natural right to control how others learn from their work—a principle we’ve never accepted for ordinary human learning. If it’s “fair use” when a person does it, why not when a machine is involved? As much as possible, we should let ideas be free; intellectual property rights are justified only when they increase incentives for innovation. My bet is that LLMs will increase innovation. And if they don’t—if machine output never substitutes for human work—then writers will still have their incentives to write.
Critics also worry that AI will hollow out higher education, automate knowledge work and wipe out jobs, and unleash destructive technologies. Perhaps. It’s also plausible that AI will enhance education once institutions adapt, enrich knowledge work, create more jobs, and produce technologies that benefit humanity. The upsides may be just as big as the downsides. We don’t know which is more likely.
Any novel technology merits careful examination. I want more scrutiny of AI companies, not less. I’m particularly worried about whether the technology might eventually lead to dangerous concentrations of power.
In any case, if you abstain, be clear-eyed about what you’re giving up: a tireless, dedicated personal assistant for many everyday problems. Is a boycott worth being less informed about a loved one’s needs in a medical crisis?
Academics hate that ChatGPT has become a weapon of mass plagiarism. Knowledge workers fear replacement. And more generally, educated progressives have come to distrust tech companies. However reasonable, this opposition breeds politically motivated reasoning about the capabilities of LLMs. By polarizing against AI, left-wing elites may be sabotaging themselves.
There are many uncertainties about AI’s impact, but it’s certain to be part of our future. You might as well begin taking advantage of it now.