Discussion about this post

Michael Inzlicht

Thank you for writing this. It's nice to see another academic, in the humanities no less, take a cautious, measured approach to LLMs. Too often I see hysterics about AI from elite academics, making silly claims that only reveal they have never used the tech themselves.

Darby Saxbe

I often use ChatGPT and Claude as a kind of 'gut check' to test my assumptions and cross-check my sources. I see it as a flawed 'hive mind' that taps into whatever information it can trawl online, often with better quantity than quality.

However, just last week I had an unnerving experience with the most recent version of ChatGPT: it completely manufactured a very plausible-sounding quote from a public figure and then, when I asked for its sources, admitted that it had generated the quote. I have also recently had it make a chart and then, when I asked for the underlying data, give me numbers that didn't match those on the chart (it admitted it had made up the numbers the first time), and had it respond to a request for scientific references with citations that did not exist (but were close enough to real citations to sound correct). To be clear, this was all within the last couple of weeks with the newest version.

I worry about students trusting it, or people using it without expertise (case in point: the recent MAHA report), because it can create a parallel reality in people's minds.
