I’ve taken on the Sisyphean task of trying to convince people not to use “ChatGPT” as the generic term for all the LLM assistants. Claude and Gemini will do just as well (sometimes a bit better, sometimes a bit worse, but with different personality in any case). It might even be worth using Grok or some of the open source ones like Mistral, LLaMa, and DeepSeek occasionally. But the big point is that OpenAI is not especially trustworthy, and we shouldn’t be turning the whole field over to them.
Haha, sorry. I'll take this into consideration; I only used it for the sake of a grabby title. (I use ChatGPT, Claude, and DeepSeek mostly.)
You say that AI doesn't necessarily disrupt learning. But I think it does -- indirectly -- by undermining its value. The problem with AI isn't that it's a bad teacher (quite the contrary) but that it makes learning futile and unnecessary. It made sense to want to know as much as your teachers did back when the expectation was that they'd retire and you'd replace them. But it's an entirely different ballgame when your teacher is the one that's planning to replace you.
It was otherwise okay, but the number of em-dashes and lack of elan vital really ruined my enjoyment of your article.
This is something that I’ve been thinking about a lot. LLMs like Open Evidence have been a godsend for info on nutritional and medical science. But it still feels like cheating to use LLMs to brainstorm for philosophy ideas. Do you think philosophers should actively avoid using LLMs for brainstorming?
I need to think more carefully about the ethics, but my first pass is that no, it's not cheating if you're using an LLM as an interlocutor, no more than using Google. In general, it's prudentially wise to exhaust your own thoughts before seeking input from an LLM; that's how you achieve the iterative engagement that increases skill and quality of ideas. Ultimately AI will raise the baseline, and philosophical skill will be about what your intellect adds to the tool. Wdyt?
I think it’s plausible that AI can raise the baseline, especially in the near future. But it strikes me that there’s a trade-off here. Philosophers seem to value the very inefficiency of brainstorming and generating ideas: there’s an aesthetic value in thinking painfully hard and slowly to develop new and original ideas. AI might erase that, since one can engage with it at a much faster rate.
For me, it's hard to see that as a loss. Philosophy should be a slow-cook field insofar as that leads to producing good ideas, not for its own sake.
I think so too. The worry is that future philosophy students may not just enhance their thinking with AI but offload it entirely. This will depend on how good AI gets at philosophy (I do phil lang, and ChatGPT is quite bad at it), but it is a worry nonetheless.