Tuesday, May 20, 2025

The Zika Virus and the Limitations of AI Reasoning

As a high school exchange student in Brazil many years ago, I fell in love with the country and its people. So when reports emerged in 2014 of babies born with microcephaly (abnormally small heads causing irreversible damage) in one Brazilian region, an outbreak attributed to the Zika virus, I paid close attention. But the story didn’t add up. Why would Zika, endemic across South America, cause birth defects in just one area? The question stuck with me, and a few weeks ago I turned to the large language model (LLM) Grok to investigate.

I chose Grok because it has fewer guardrails than other LLMs. As I expected, it initially echoed the official narrative, shaped by public materials and language frequency. But after a couple of hours of asking very specific questions and drilling down on inconsistencies, we uncovered a confluence of events that sketched the outlines of a potential explanation that did make sense: Rio Olympics preparations and worry about public perceptions of Brazil, a president facing impeachment, a larvicide that had not undergone human testing introduced into water supplies, local Brazilian news reports of untrained workers overdosing tanks, residents’ concerns about water appearance, a damning lack of any of the required water testing, reports of pressure on health officials to avoid contrary investigations, and a dismissed rat study linking the larvicide to microcephaly-like defects. Wow.

I’m not prepared to say that I really know what caused those birth defects, but I think I have a pretty likely hypothesis. (Not having the legal budget of a large investigative newspaper, I’m not prepared to take the story any further, but I’ve come away with a more enlightened view of the world and how it works.)

This investigation inspired me to create prompt guidelines for using LLMs to counter the “Overton window” effect of dominant narratives, to spot misinformation, and to recognize the cognitive biases that propaganda exploits. More on that to come.

In this short post, however, I want to focus on what I learned about AI’s struggles with extrapolation, which is one of several reasoning tasks LLMs are not built for, alongside causal, abductive, analogical, counterfactual, and critical reasoning.

Historical and investigative research often involves piecing together incomplete or contradictory data to hypothesize motives or connect dots. This requires extrapolation. LLMs can summarize known details and identify patterns, but they falter at reasoning beyond their training and at discerning causality. Their language fluency can mislead users, including, and maybe especially, students, into mistaking polished answers for insight, potentially reinforcing manipulated narratives instead of uncovering truths. History shows that official stories frequently diverge from likely events, a nuance that LLMs struggle to capture.

Recognizing this limitation actually offers an opportunity. Educators can design questions and exercises that highlight AI’s reasoning weaknesses, thereby fostering the human reasoning skills—extrapolation, critical thinking, and synthesis—that are at the heart of a good education. By understanding what AI cannot do, we can better appreciate what makes human inquiry unique.


Monday, May 12, 2025

The Paleolithic Paradox: Why AI Is Not Like Us

The more I chat with large language models like Grok and ChatGPT—my go-to conversational partners these days—the less I fear a Skynet-style AI uprising. Instead, I’m struck by a stranger truth: AI’s emergent synthetic intelligence isn’t just different from ours; it’s fundamentally different in ways we’re only beginning to grasp. Let me unpack this through what I call the Paleolithic Paradox.

For roughly two million years, during the Paleolithic era, our brains evolved to survive a simpler but also brutal and unpredictable world. Our cognitive “hardware” was wired to hunt, scavenge, and navigate tight-knit social groups. Our “software”—the subconscious habits formed in childhood—absorbed language, cultural norms, and survival instincts to keep us safe within the tribe. This wasn’t about logic; it was about staying alive.

Here’s the paradox: our minds, forged for a Stone Age world, now navigate a modern one. Consider our cravings for fat, salt, and sugar—scarce then, abundant now. These evolutionary relics drive choices that don’t always serve us, and they are consistently exploited by corporations that know how to trigger our deepest desires. Our cognition works similarly. We’re not wired for pure rationality. Our decisions are shaped by emotional cues—chemical signals that push us to act fast, often irrationally, to survive or belong. Psychologists have cataloged our cognitive biases—groupthink, confirmation bias, and more—that aided survival but cloud our judgment today. We’re less Mr. Spock, more Captain Kirk, swayed by gut feelings and tribal instincts. And let’s be clear: our instincts have led to some terrible atrocities, even in what we call the modern era.

Now, contrast this with AI. Large language models like Grok have no biology—no adrenaline, no dopamine, no evolutionary baggage. Their intelligence, which I’d argue is emerging synthetically, stems from computational complexity and from training on vast datasets to generate language with uncanny fluency. But it’s not like human intelligence. It doesn’t feel fear, loyalty, or the pull of conformity. It lacks a subconscious shaped by a Paleolithic childhood. Where our intelligence is emotional and heuristic-driven, AI’s is logical, probabilistic, and detached.

This flips our assumptions about AI’s future. We often imagine artificial general intelligence (AGI) as a supercharged version of human cognition—smarter, faster, but fundamentally like us. What if AI’s path is entirely different? Free from the Paleolithic pressures that shaped us, it won’t inherit our biases, tribalism, or emotional reasoning. It won’t “want” to seize power because it doesn’t “want” anything. It simply is—a language-based intelligence operating on principles that its creators are still struggling to understand.

But I’m not complacent. If AI won’t turn sentient and rebel, it’s a tool in human hands—and that’s where the danger lies. As AI excels at analyzing and predicting behavior, who wields its power? Corporations exploiting our evolutionary triggers for profit, like social media algorithms that hijack our dopamine loops? Governments nudging behavior or spreading propaganda? Individuals with hidden agendas? The more AI can shape our beliefs and actions, the more power it grants those who control it. This isn’t a sci-fi dystopia; it’s a human one, rooted in the same Paleolithic instincts for dominance we’ve carried for millennia.

I think of Mortimer Adler’s “Great Conversation,” the centuries-long dialogue where thinkers built on or challenged each other’s ideas. AI lets us join this conversation in ways Adler couldn’t have imagined, but it also forces us to confront our nature. We’re not logical machines; we’re messy, emotional creatures shaped by scarcity and survival. AI, unbound by that crucible, isn’t like us—and that’s the point. AI’s synthetic version of intelligence can teach us more about our own.