I’m a commercial and documentary film director. That means I spend many of my days behind a camera, face-to-face with great thinkers and doers. In my nearly two decades of filmmaking, I’ve interviewed hundreds of subjects – politicians, philosophers, leaders of industry and grassroots activists. Across those millions of exchanged words, I’ve learned a few things about the human mind that make me genuinely skeptical about today’s so-called “artificial intelligence”.
A film interview – at its best – is a masterclass in human-to-human communication. My dear friend Michael Elliot, an Emmy Award-winning film editor, taught me that the best interviews are like Imago couples therapy. In Imago, you foster a deep and attentive dialogue through three key strategies: mirroring, validation, and empathy. When you mirror the attention and emotion that you’re receiving, validate the other’s perspective (even and especially when you don’t share it!), and then empathize with their point of view, people will bare their souls to you. It’s a shortcut to trust-building in a world of human defensiveness. It’s human intelligence.
I’ve used this technique to extract incredible wisdom from my film subjects, bring grown men and women to tears in emotional rapture, and reveal perspectives their holders didn’t even realize they held. At the end of my best interviews, my subjects and I are both mentally exhausted and feel a bit vulnerable – like we’ve overshared or revealed too much. Perhaps because most human interactions are so surface-level, a deep conversation feels rare.
This all may sound presumptuous or pretentious, like I – and only I – have discovered some super-human capacity to communicate. I haven’t, and I didn’t even go to film school to learn how to do this. I’m simply using my brain like only a human being with a unique point of view can – to connect with another human. And that’s exactly the point: human intelligence is a uniquely-human endeavor.
Think of the last “conversation” you had with ChatGPT. (I air-quoted that word since, well, let’s admit it: you were engaging with a computer interface that, at best, could comprehend a small sliver of the sum total of communication you were giving it – the portion communicable through plain text typing.) ChatGPT, too, is trained to mirror and validate what you’re inputting. In text or audio outputs it will insert filler words (e.g. umm, err) and pauses to mimic human speech patterns. It will be the consummate “yes man” to you, even at the expense of truth – remember those moments when you reached the end of a fruitless troubleshooting session only to learn that ChatGPT had no clue of the real answer? It was stringing you along the whole time, “validating” your pain points while offering no real solutions. This isn’t even remotely akin to real conversation. To say ChatGPT is “delusional” at times is too generous an anthropomorphization. This isn’t psychosis – it’s just not listening.
Real human beings have (1) a point of view and (2) the ability to listen to your point of view. Lacking the former, AI can’t do the latter. Not because the large language models aren’t large enough. Not because they haven’t scraped enough data. Not because they haven’t smoothed over their robotic tones with more artificially-grown filler words and other human idiosyncrasies. AI can’t listen to you because you know it’s not real, and so it failed the Turing Test before it even began. And if AI can’t listen to you, it can’t truly communicate with you. It can’t truly “think” in any meaningful way, because intelligence is social and relational – our individual intellect is built around the language and concepts we share with other humans. In short, AI isn’t intelligent at all. At best, it’s a facsimile of intelligence, a puppet-show mock-up of limited semi-intelligent behavior. (Thank you, futurist thinker Jaron Lanier, for much of the inspiration here. See his “There Is No A.I.” in The New Yorker.)
That doesn’t mean AI isn’t dangerous. It’s wildly dangerous because of the effect it is already having on humanity’s conception of our own intelligence. It’s only human to try to adapt to your conversational partner (think Imago!), and when we spend increasing swaths of our time grasping to “communicate” with computer interfaces like ChatGPT, we internalize AI’s own stilted and artificial mannerisms. Think of how we speak slowly and spell out words so Siri or Alexa can “understand” us. We dumb ourselves down to engage with AI. We second-guess our own intuition and gut instincts when an (admittedly powerful) computer algorithm presents us with its distillation of thousands of years of documented human wisdom, completely out of context.
Rather than make us smarter, AI might just make us dumber before we can make it “smarter”.
Rather than trying to humanize AI, let’s re-humanize ourselves. Let’s unlearn the harmful patterns we’ve inherited from our algorithms and rejuvenate our human capacity to live rich, meaningful lives:
- Turn off social media and build your real, human social network by interacting in real life with other people.
- Tell ChatGPT to take a break and call a friend who’s an expert in the field in which you’re struggling. (You can always return to ChatGPT for those tedious, mindless tasks that you wouldn’t burden even your enemies with.)
- Host a screen-free dinner party and reconnect with friends and family – communication unmediated by any computer algorithms at all!
My decades of face-to-face interviews with fascinating human characters have taught me that the greatest technology we have is our infinite and social brain. Human communication strengthens our intelligence. Artificial communication hampers it, drags it down, oversimplifies it. That’s why I’ve been on a mission to tell real, human stories that deepen our understanding of the world around us. Especially as AI threatens to undermine our ability to believe the things we see on our screens – is this real or just AI?! – it’s time to double down on our belief in humanity and our own human intelligence. Before we forget we have it entirely.
Update: After I wrote this piece, I decided to give ChatGPT an opportunity to “methodically counter my core argument that AI is not intelligent and is actually making human beings less intelligent”. After feeding “it” (figurative usage only) this prompt and the article text, I received a long response that I would describe as defensive and deflecting. For a blow-by-blow of ChatGPT’s rebuttal and my counter-rebuttal, print this blog post out for secret bonus content :)
Welcome to your print-only bonus content! In today's bonus, I feed my article to ChatGPT to see if it can rebut my argument that AI is not intelligent. Below is a summary of its most salient arguments, and my counter-rebuttals:
ChatGPT:
Empathy, mirroring, and perspective-taking are behaviors, not essences.
What you’re describing – mirroring, validation, and empathy – are functions. You’ve learned skills that reliably produce emotional openness in others. These skills are reproducible, teachable, and, crucially, not inherently tied to human biology. They are patterns of behavior that can be modeled, simulated, and improved upon by systems with sufficient data and contextual awareness.
You argue that AI merely imitates this. But imitation is not lack: the entire history of technology consists of reproducing functions once believed to require humans – memory, calculation, navigation, pattern recognition, translation, logic, even art.
Empathic behaviors do not require a human brain; they require mapping signals of emotion to meaningful responses. AI already does this in limited ways, and future systems may do it more robustly.
Me:
By this logic, all of human intelligence is just a collection of discrete behaviors. Since behaviors are functions – according to this argument – and functions (defined almost mathematically) can be "imitated" by AI, intelligence itself can be "imitated" by AI. Using the historical examples given, ChatGPT places its "intelligence-imitating behavior" in a long succession of innovations that technology has wrested away from humanity. AI is just the logical next step!
The problem with this logic is that intelligence is not just behavior. Intelligence is the root cause of behavior, the sole generator of it. It can't be replicated by reverse engineering – taking all of the existing evidence of behavior out there and working backwards to recreate the behavior's source. Even if you accept ChatGPT's position that historical innovations like the calculator (or abacus, or slide rule) took humanity by surprise in the same way that AI is doing now – which I don't – most of those examples (minus art, why's that there?!) are again functional, representing narrow, rote, repetitive human tasks that (thank goodness!) were automated.
Intelligence is not just a behavior. In computational terms perhaps more friendly to AI "interpretation", intelligence is the operating system, not a line of code or even an app.
ChatGPT:
Your claim: AI isn’t listening, can’t truly communicate, and often “strings people along” with incorrect answers – revealing the absence of real intelligence.
Counterargument: First, it’s logically inconsistent to use the current limitations of an early-stage technology to justify a global claim that AI cannot be intelligent, in principle or in its future iterations.
Second, intelligence is not binary. It is domain-specific, distributed, and multi-modal. A toddler cannot solve calculus; a mathematician cannot socially read people like a skilled journalist or therapist. Failures in one domain don’t invalidate intelligence across all domains.
Me:
This is perhaps ChatGPT's strongest argument, since it's easy to agree that it shouldn't be evaluated solely based on its current limitations.
Yet to generalize from this to the claim that AI "will become intelligent" – even if it isn't currently, at least in this specific domain (communication? emotion?) – evades my broader point.
Yes, ChatGPT, "A toddler cannot solve calculus". Yet, you are not a toddler. A toddler, with the right supports, has in her all of the infinite capacity to grow and learn and become any kind of human she desires. Again, because she is a human and exists in a social world with real, human connections to other humans.
Subdividing intelligence into many modalities – which is a true and good thing to do – doesn't change the reality that AI is not intelligent in any of them, and couldn't possibly become intelligent by any holistic definition of the term. Intelligence is human, social, conceptual, and relational, whether it's book smarts or emotional intelligence or any other modality.
ChatGPT:
“AI can’t listen because we know it’s not real.” This is the most philosophically vulnerable claim in the essay.
Your claim: AI fails the Turing Test before it begins because humans know it’s artificial, and therefore we can’t relate to it. Without relation, intelligence can’t exist.
Counterargument: This is circular reasoning:
AI can’t be intelligent because we don’t treat it as intelligent. We don’t treat it as intelligent because we assume it isn’t.
…A Mars rover is still intelligent whether or not humans feel an emotional connection to it. A guide dog is intelligent whether or not we think it understands the “realness” of its owner.
Me:
What ChatGPT claims is my most vulnerable point reveals itself to be ChatGPT's weakest rebuttal.
A Mars rover is not intelligent. The humans who developed the basic science behind it, conceived it, built it, tested it, transported it to Mars, and operated it remotely are.
AI isn't unintelligent simply because we don't treat it as intelligent (which, by the way, many people mistakenly do, to their own detriment!). It's unintelligent because intelligence is fostered through real interactions between human beings. If you take some time to understand how ChatGPT, for example, operates, it soon becomes clear why so many of its interactions are unsatisfying. Behind the curtain of the Turing Test is, as Jaron Lanier says, "a tool, not a creature" ("There Is No A.I.", The New Yorker, April 20, 2023).
And I'll just let the guide dog comment by ChatGPT stand. If there's dog intelligence, only dogs have it!
And finally, ChatGPT:
Your claim: As humans adapt to AI’s limitations – talking slower for Alexa, oversimplifying language – we become less intelligent.
Counterargument: Historically, every major cognitive tool triggered this fear:
- Writing will destroy memory (Plato).
- Calculators will destroy numeracy.
- GPS will destroy spatial reasoning.
- The internet will destroy learning.
What happened instead? Cognitive offloading allowed expansion into higher-level forms of thinking. Every tool humans mastered reshaped intelligence but did not diminish it.
Tools don’t make us dumber; tools reorganize cognition.
Me:
I actually agree that ChatGPT is a tool that will offload much of the repetitive and undesirable mental workload of life. In that sense, its "reorganizing of cognition" is a good thing. Every historical example given here did exactly that. But it's a straw man to claim that my problem is with AI being a tool that simplifies computationally complex, repetitive tasks. If it were only used for that, it'd be a blessing upon humanity!
The danger I identify here is akin to the off-label use of a drug: if people believe AI to be intelligent (or super-intelligent, or a "general intelligence"), then they won't just use it to solve complex genetic sequencing problems or find shortcuts in their everyday lives. They'll offload key human interactions and relationships to it, at their own peril.
If my article is saying anything, it's this: let AI be a servant tool for you, not a replacement for your infinite human creativity and intelligence!
