Part 1: Why Logic Alone Doesn't Drive Change
Everywhere I go—industry conferences, webinars, faculty roundtables—the same question arises: What will artificial intelligence mean for the future of advice?
I think the curiosity is warranted. AI is evolving rapidly. It can analyze data at scale, generate insights in seconds, and even engage in coherent, persuasive dialogue. That last development—AI’s ability to reason and persuade—has prompted the most existential concern. If machines can “outreason” us, where do humans fit?
I understand the anxiety. But I think we’re asking the wrong question.
The question isn’t whether AI can present better logic than a financial advisor. In many cases, it can. The better question is whether logic alone is what drives real change. And in my experience—in client conversations, behavioral research, and the complexity of human life—it rarely is.
We don’t act because we’ve received the most logical argument. We act when the information resonates with our relationships, values, and lived experiences. That’s where human advisors matter most—not in providing flawless logic but in offering something AI cannot: trust, safety, and shared meaning.
Let me explain.
A recent article by Sander Van de Cruys in the Seeds of Science Substack outlined how AI chatbots engaged conspiracy theorists in patient, rational debate and sometimes, remarkably, even changed their minds. The bots didn’t use emotion or appeal to social identity. They offered calm, consistent, logic-based dialogue.
It worked, and the reason is sobering: AI doesn’t get frustrated. It doesn’t get defensive. It doesn’t interrupt, dismiss, or try to “win.” It simply stays in the conversation, unshaken, unthreatened. In other words, it does something many humans struggle to do: it reasons patiently.
And that’s what’s most revealing. AI isn’t persuasive because it’s better than us at logic. It’s persuasive because it avoids the emotional landmines that derail human reasoning—defensiveness, fatigue, and identity threat. In its neutrality, AI mirrors an idealized version of dialogue we humans often fail to achieve: one centered on understanding, not overpowering.
But that doesn’t mean it can replace us. If anything, it clarifies what only humans can do.
AI can keep someone in a conversation. But it can’t create the relationship that permits a person to change.
Take the story of Daryl Davis, a Black blues musician who spent years building relationships with members of the Ku Klux Klan. Through long, respectful conversations, he persuaded over 200 individuals to leave the Klan—not by out-arguing them but by offering human connection. He sat with them, listened, challenged, and stayed present. He created the conditions where transformation became possible—not through logic, but through relationship.
AI can’t replicate that. Reasoning might be cognitive, but change is emotional and social. It requires the presence of another person willing to listen without condemnation and stay engaged long enough to make new beliefs feel safe.
We see this in structured efforts like The Human Library—a global movement that lets people “check out” other people for conversation. Participants speak with those whose backgrounds, identities, or experiences are vastly different from their own. The goal isn’t to argue but to understand. And that’s precisely what makes it effective. People leave those conversations not with bullet points but with stories. They leave not just with new information, but with new context.
Again, this isn’t something AI can do. It can simulate a conversation. But it can’t generate trust. It can’t offer a facial expression, a change in tone, or a shared silence that means, “I understand.”
This is the heart of the matter: humans change in social environments. We change when the conditions feel safe. When the cost of change doesn’t feel like a loss of identity. When we feel seen.
As Karl Dunn’s Undividing newsletter on Substack, where I first learned about Daryl Davis, puts it: people don’t change because they lose arguments. They change when the emotional and social conditions allow them to.
Which brings us back to advice. If financial advising were only about logic, AI would be poised to take over. But financial planning is never just about the math. It’s about making sense of that math within a broader narrative: a person’s values, relationships, goals, and context. These are inherently social dimensions. And that means advisors are, and will continue to be, essential.
Not because clients can’t get logic elsewhere, but because they can’t get resonance, reflection, and psychological safety from a chatbot.
Ironically, the rise of AI may reinforce—not reduce—the importance of human advisors. As AI makes financial tools more efficient and accessible, more people will seek help for the first time. And they’ll need more than logic. They’ll need a person to help them navigate decisions that affect not only their finances, but their identity and relationships.
So the risk isn’t that AI is getting too good. The risk is that we forget what humans do best.
Advisors who double down on their ability to listen, to build trust, and to reason socially will not just remain relevant—they will become indispensable.
In the next installment, I’ll explore why humans often reject the “right” answer—and why great advice has to go beyond what’s rational to become truly transformational.