26 Comments
Michael Inzlicht:

Thank you for writing this. It's nice to see another academic--in the humanities no less--take a cautious, measured approach to LLMs. Too often I see hysterics from elite academics about AI, making silly claims that only reveal they have never used the tech themselves.

Darby Saxbe:

I use ChatGPT and Claude often as a kind of 'gut check' to test my assumptions and cross-check my sources. I see it as a flawed 'hive mind' that taps into whatever information it can trawl online, often with better quantity than quality. However, just last week I had an unnerving experience with the most recent version of ChatGPT: it completely manufactured a very plausible-sounding quote from a public figure and then, when I asked it for its sources, admitted that it had generated the quote. I have also had recent experiences of it making a chart but then, when I asked for the underlying data, giving me numbers that didn't match the numbers on the chart (and admitting it had made up the numbers the first time), and of asking for scientific references and having it give me citations that did not exist (but were close enough to existing citations to sound correct). (To be clear, this was all within the last couple of weeks with the newest version.) I worry about students trusting it, or people using it without expertise (case in point: the recent MAHA report), because it can create a parallel reality in people's minds.

Victor Kumar:

Yeah, like I say, it's bad for one-off particulars, you definitely have to double-check those.

EC:

In Star Wars, Obi-Wan Kenobi tells Luke, “Your eyes can deceive you; don’t trust them.” That might be fine for Jedis and their extra-sensory perception, but for the rest of us it’s just about the dumbest possible advice you can give anyone. Even if your eyes deceive you sometimes, they’re usually trustworthy. Even if they were trustworthy less than half the time, what they tell you would still be informative, and you could update accordingly.

So it is with LLMs. Whenever I hear someone say they never trust LLMs (they usually just say “ChatGPT”) because they sometimes hallucinate, I immediately lower my opinion of that person. We learn to trust imperfect things all the time: our eyes, our memory, medical screening tests, etc. We don’t trust perfectly and blindly (pun intended), but we learn how to deal with the uncertainty.

Two small notes: is online piracy really a left-coded thing? Do right-wingers not do it? I would have thought it was predominantly young, poor-ish, tech-savvy men. And the same question about distrusting AI: is this really coded for the political left? I would have expected the distrust to cross partisan lines.

Victor Kumar:

Good points.

Piracy is a young person thing, so left-coded in that way. I don't think distrust of AI has become politicized yet, but it seems to be heading that way among elites, which often triggers broader politicization.

Misha Valdman:

I agree -- it's stunningly useful. It frees you from the burden of needing other people. But anything that frees you from that burden also frees other people from the burden of needing you. And so it ushers in a world in which no one needs anyone. And I don't think humanity is ready.

Victor Kumar:

Fascinating. I would want to use it to scaffold and support relationships, but that may not be its future.

MarcusOfCitium:

That has been the thrust of technological improvement since…at least the dawn of the Industrial Revolution. And indeed it’s a problem…or at least a mixed blessing. I think it’s a huge problem with the modern world. I don’t think GPT is anywhere near as guilty in this, though, as Amazon (which I also use extensively) or the internet in general. Pocket computers, I mean cell phones. (I’m old enough to remember when we had to stop and ask strangers for directions or call a friend or relative on a payphone.)

Misha Valdman:

Most post-industrial technologies, Amazon included, made you more dependent on strangers and less on family, neighbors, and friends. But AI makes you less dependent on humans entirely.

MarcusOfCitium:

I think the main thing is we no longer need relationships with people. People used to be parts of communities (when that didn't just mean the subset of people who have the same kink or nerdy hobby or whatever). Luckily I have a wife and pets and parents nearby, but I don't even leave the house for work. (And I love it. But...there is something missing.)

A market-based transaction with a stranger isn't the same. I don't have any relationship with any Door Dash delivery person, nor do I care to--half of them look like they're obviously on fentanyl or something.

But I guess I do technically depend on them in a way. I don't see how anything would be different if I didn't, but I suppose that long term it could be a problem when people literally don't need other people at all, even to do manual labor in a far-off country to make our continued existence possible, because robots could provide you with everything you need (maybe even companionship!) even if you were literally the only human.

Lance Taylor:

Very thought-provoking discussion, Victor, and I really like how you started it off with your personal story. As an AI enthusiast, I'm filled with excitement about the possibilities. People who focus on the hallucinations typically don't hold humans to the same standard. Yes, we should be wary of the possibility of hallucinations, but at least with AI, we can direct it to cite sources.

I think the better we get at prompt engineering, the more we'll build into our queries failsafes like telling ChatGPT to "show your thinking process, cite your sources, and explain your conclusions."

Anna Eplin:

Great post! These points all reflect my own experience and opinions about ChatGPT, but I haven’t really heard others saying this on the internet. Thank you for writing this piece and sharing it! I’m excited to see how our world will grow through the added intelligence of AI.

Daryl Cameron:

Great article, Victor. I appreciated a strong argument for epistemic and ethical humility in this space. I've been struck to see how many social scientists working on AI empathy seem dismissive of the mere possibility that empathetic expressions from AI could be useful, in a way I find ethically problematic. Sometimes there is no immediate or reliable human option for empathy or care, and I appreciated that your article noted that point.

Josh May:

Great post! Agreed, LLMs are quite useful and lots of people are ignoring or overly dismissive of that. But let me play devil’s advocate. I could see a post like yours 15 years ago about social media:

“C’mon, it helps you stay connected to people and see what’s trending in the zeitgeist. There are even these fun personality quizzes and short clips of hilarious videos. Sure, there might be downsides, but it sure seems great right now!”

But it’s not 15 years ago. We’ve seen how these technological advances have been pretty awful for mental health, for politics, and more. Not for everyone, but for many. So I don’t blame people who in this moment are predicting that embracing this technology is just participating in more of the same sad trajectory of human society.

You’re rightly identifying that some people are opposed to LLMs and that this leads them to over-inflate the present limitations or harms. But that might not get to the heart of the dispute, which is about whether this is all going to end up good for us overall. Can we really set aside now whether the technology or its likely future is going to be more harmful than beneficial?

Victor Kumar:

Interesting! I don't see why we should analogize ChatGPT to social media rather than genuinely helpful technologies like Google.

But stepping back from that analogy, I agree that a big question is what kind of future AI holds for us. I tried to get at that in the last section of the essay (ethical concerns regarding higher-ed, automation, destructive technologies, etc.). I think your analogy points specifically to impacts on mental health and intelligence.

I don't think we have a good sense of whether the impact will be positive or negative, as I said. But I also don't think that boycotts make sense even if the impact is likely negative. It's going to be developed. Get to know the tech so you can be an informed critic.

Brian Gallagher:

Solid post. I recently used an LLM (Grok) to troubleshoot my malfunctioning vacuum. I find it useful as an explanation tool as well, but there I’m more cautious and verify what it’s saying.

Victor Kumar:

Thanks! You learn more and more where to trust and where to double-check, but I am under no illusion that I've got that distinction figured out.

Michael Dickson:

Thanks Victor.

"Especially when conventional wisdom is wrong." This proviso seems really important in areas (like philosophy, but not just there) where creative thinking is important. I think that's partly why I hate it so much when students use it to write essays. It isn't the cheating so much as it is that the essays are just boring. I preferred the time when students wrote less well informed, even less well reasoned, but more interesting essays. (Lots more to say there.)

Software usage? Medical information? Home improvement? Yeah, sure, why not? Conventional wisdom isn't terrible there. I learned home improvement from reading books and watching TV. I don't see why ChatGPT shouldn't be another tool in that arsenal.

Victor Kumar:

Thanks! I plan to eventually write something on using LLMs in intellectual work, where I think it can be helpful in certain ways but is far more limited. Your point about heterodoxy and creative thinking is relevant there too.

In my experience so far on home improvement and the rest, it blows away the rest of the arsenal.

Jindy Mann:

"Assessments like this litter magazine essays and social media, but they reveal less about the tool than the authors: they're confessing that they don’t know how to ask it good questions."

This neatly encapsulates why AI is not intelligence. An intelligent human in a dialogue would either be able to interpret a 'bad' question for its original intent, or ask a clarifying question that allows the dialogue to continue and deepen. Even a child can infer the intended meaning of certain requests and questions that they don't fully understand.

AI is not really the intelligence it claims to be if the question needs to be perfectly framed; it's more like a database that needs the query to be coded in the right way.

Glenn Toddun:

“And so, while the end-of-the-world scenario will be rife with unimaginable horrors, we believe that the pre-end period will be filled with unprecedented opportunities for profit."

E2:

"... intellectual property rights are justified only when they increase incentives for innovation."

Do you apply this justification to theft of other kinds of work, or only to unique creative works of the mind?

Everyman:

Personally, I just haven't had a use case to justify a monthly subscription. I don't write for a living or do knowledge work in the classic laptop sense. I am in the mental health field, which brings a whole host of privacy concerns. My inbox is flooded with ads and sales reps pushing their latest AI product on me. I have no idea how to evaluate these, and asking ChatGPT seems like asking a Ford dealer what the best brand is.

My life is fairly boring right now with work, raising a kid, and... not much else. I do look forward to AI notes summarizing sessions, but privacy still remains a huge concern of mine. In my personal life, I am in shape, I have my diet sorted out, and I read articles, books, etc. I do use AI like OpenEvidence to research nuanced cases, but just the other day I found an error in OpenEvidence that makes me question its outputs.

Regarding being replaced by AI, I think healthcare will be slower than most fields, for both regulatory and human reasons. Coders in the Bay Area may have no problem talking to Claude, but the median American citizen does not even know what kinds of questions to ask themselves, and they certainly don't trust a chatbot yet. There are also issues where people do need to be challenged that bots just don't quite manage (the sycophancy problem).

I just... why do I need it?

Cyberneticist:

It's too expensive and doesn't work for my use cases.
