The Power of 'I Don't Know' in AI Conversations
Admitting uncertainty is a leadership superpower. How intellectual humility makes you a better AI user, a better leader, and a better human.
In most professional contexts, "I don't know" is a phrase that carries risk. It can signal incompetence, unpreparedness, or the kind of uncertainty that erodes confidence. We learn, early in our careers, to reach for confident-sounding answers even when that confidence is not warranted.
AI has changed this calculus, in an unexpected way.
What AI makes visible
When everyone has access to AI tools that produce confident, comprehensive-sounding responses, the person who says "I don't know, but here is how I would think about finding out" stands out in a new way. They are demonstrating something that AI cannot demonstrate: intellectual humility combined with genuine curiosity.
This combination is rare and valuable. Intellectual humility without curiosity is passive. Curiosity without intellectual humility is often just confidence dressed as exploration. Together, they produce the thing that actually generates novel insight: openness to being surprised by the answer to a question you cared enough to ask honestly.
In leadership
Leaders who say "I don't know" appropriately, meaning honestly and in response to things they truly do not know, create something valuable in their teams: psychological safety around uncertainty. When the leader is willing to not-know, the team becomes willing to not-know. When the team is willing to not-know, they ask better questions, raise earlier warnings, and bring more honest information upward.
The leader who maintains a performance of omniscience produces the opposite: a team that tells them what they want to hear, that filters out bad news, that rounds up its confidence to match the leader's apparent certainty. In a rapidly changing environment, that information environment is dangerous.
In AI interactions
The practice of "I don't know" extends to AI interactions in a specific way. The best AI users I know approach each interaction with real openness about what they might be wrong about: they use AI not to confirm what they already think but to test it.
"What am I missing in this analysis?" "What would a skeptic say about this approach?" "Where is this argument weakest?" These prompts require intellectual humility, the willingness to receive an answer that challenges your position, and they produce significantly better AI outputs than prompts designed to confirm what you already believe.