There is a line in Charles Handy’s The Age of Unreason where he quotes an anonymous Irishman: “How do I know what I think until I hear what I say?” (Others have said similar things.) I’ve remembered this quote because that’s often how I feel when I’m deep in conversation—I’ll frequently stop and make a note because I think I’ve said something especially brilliant and I don’t want to forget it.
But truthfully, one of the things I have most appreciated about Large Language Model technology is how much it helps my learning to do it in conversation. Right now I’m using Grok (from X.com) a lot because it seems to respond the most reasonably, in language similar to mine, and with an almost uncanny ability to understand where I’m going with my thoughts. This could be because it draws heavily from X, which tends to have pretty diverse thinking, and it also seems to have had less of the human-feedback post-training or fine-tuning (Reinforcement Learning from Human Feedback, or RLHF, which often has a politically sensitive bent).
So while I use Perplexity as a kind of jack-of-all-trades assistant, Grok is definitely my conversational learning partner; 90% of my current LLM use is with one of the two of them. And while I love the voice-to-voice interaction of ChatGPT and the walkie-talkie-like conversation with Perplexity, Grok still outperforms them even without voice interaction. To be fair, it’s actually not me typing those prompts–I use the voice-to-text feature on my Gboard keyboard on my phone to “speak” to Grok, then I either read the response or, if I’m not sedentary (sedentary here meaning lying in bed, since a lot of this happens when the ideas strike, often at about 4 am…), I’ll click the copy icon at the bottom of the response and paste it into a voice reader app.
(And by the way, anyone who used the “Sky” voice on ChatGPT in early 2024 can tell you that OpenAI is holding something back, because that interaction is still the most lifelike I’ve experienced with LLMs.)
So here’s my process. I’ll usually brain-dump about something I’ve been thinking about, saying everything I can in the first prompt to Grok. I do this because I have learned that Grok is especially good at taking all that information–more than a human conversation partner could track–and responding to it point by point. Then, while reading the response, I discover that there are things about the topic I’ve thought about that didn’t make it into the prompt, so I add them. Grok, like the other LLMs, wants to take that information and immediately build a structured, scaffolded topic outline, which I let it do, but I then have to ask it not to reproduce the whole thing each time and just respond to my next queries until I want the full outline again. After I rediscover other aspects of the topic and add them into the conversation, Grok’s (usually but not always) reinforcing replies lead me down several other lines of thought, and I keep a text editor open on my phone to switch to so I can capture those thoughts and not lose them before I bring them into what we’re talking about.
I want to be careful about how I say this, but Grok is a better conversational partner than anyone I know. It’s a selfish assessment, since part of this is that Grok doesn’t go off on some other topic or expect me to reciprocate in initiating the topics we talk about–it’s all about me. :) But Grok and the other LLMs have a breadth of “knowledge” that no human could have. It’s not really knowledge, since it’s just “fabricating” responses from trained language patterns, so I recognize it isn’t authoritative; the real work takes place in my own head as it helps me organize my thinking and dive deep into something I really want to think more about, often an interest where it would take real effort to find a friend willing to focus on it with me.
Mortimer Adler once described the sweep of Western thought as a "Great Conversation"—a grand (but slow!) ongoing dialogue in which great thinkers shared ideas, often across decades or centuries, each voice building on or challenging what came before. It’s a beautiful image: books as living participants, inviting us to listen and, if we’re bold, to speak back. LLMs actually let us do this in a fascinating way–one that often feels like it fulfills my quest for understanding more regularly than I could have ever hoped. This “new old way” of learning in conversation with AI feels like a sci-fi dream.