Why We're Afraid to Talk About Ethics in AI
Fear of being political. Fear of being wrong. Fear of slowing down. The social barriers preventing the most important AI conversations from happening.
I facilitate a lot of AI conversations with business leaders, professionals, and teams. In every session, there is a moment when the conversation could go deeper — someone starts a sentence that points toward a values question, an ethical tension, a genuinely difficult choice — and then pulls back. They finish the sentence with something safer. The moment passes.
After enough of these moments, I started asking directly: why didn't you say what you started to say? The answers converged on a small number of themes.
Fear of being political
Ethics in AI is inherently political in the sense that it involves contested values, power distributions, and trade-offs between competing goods. In a business setting, raising political questions feels risky. You might be seen as naive, or as having an agenda, or as prioritizing personal values over professional judgment.
The cost of this avoidance is paid downstream, when the ethical issue that was not discussed becomes an organizational crisis, a reputational problem, or a harm that could have been prevented. The conversation that was "too political" in the planning stage becomes very political indeed when it appears in a news headline.
Fear of not knowing enough
AI ethics feels technical to many people — an area that requires specialist knowledge before you can weigh in. This is partially true: some AI ethics questions do require technical understanding. But many of the most important ones do not. "Is it fair to use AI to screen job applicants without telling them?" does not require a PhD in machine learning. It requires moral attention, which everyone has.
Waiting until you know enough to weigh in on ethics is a way of never weighing in. You know enough. What you might not have is permission — from yourself, from the culture around you — to treat the question as yours to engage with.
Fear of slowing down
Perhaps the most honest barrier is simply this: ethical conversations take time, create uncertainty, and sometimes lead to conclusions that are inconvenient. In organizations where speed is rewarded and hesitation is costly, the ethical conversation can feel like friction rather than value.
The answer to this is not to pretend that ethical conversations are fast. They are not. They are the kind of work that is slow now and saves enormous time and pain later. Building the organizational culture that values that kind of slowness is itself a strategic choice — one that the AI era makes more urgent, not less.