What My Son Taught Me About Prompt Engineering
Children ask better questions than most professionals. They haven't learned to accept the first answer. That's a superpower we've forgotten.
My son is eight years old. He does not know what a large language model is. He has never read a guide to prompt engineering. And yet he is, in the specific way that matters most, better at interacting with AI than most of the professionals I coach.
The reason became clear to me on an ordinary Tuesday afternoon, when he was sitting next to me as I worked. I showed him what I was doing — asking an AI tool to help me research something. He watched for a moment, then said: "But why does it think that? Did you ask it why?"
I had not. I had accepted the output because it looked reasonable. He, with no training in critical thinking, no background in epistemology, no experience with AI whatsoever, had immediately identified the missing question.
What children do differently
Children have not yet learned to accept the first answer. They have not yet developed the cognitive habit of satisficing — choosing the first option that meets a minimum threshold rather than continuing to search for better ones. This habit is adaptive in adults who face time pressure and cognitive load. But it is devastating in the context of AI interaction, where the first answer is rarely the best one.
Children also have an unselfconscious willingness to ask "but why?" and "what if?" and "what happens if you change that part?" They do not worry about looking ignorant. They are not trying to appear efficient. They follow their genuine curiosity without the social filter that adults have developed.
These are precisely the behaviors that define excellent prompt engineering. Not the technical frameworks. Not the chain-of-thought methodologies. The willingness to stay curious, push past the first answer, and ask the obvious question that nobody else asked.
How we lose this
The suppression of this natural questioning tendency is one of the most significant costs of traditional education and socialization. Schools reward correct answers, not productive questions. Workplaces often punish apparent confusion and reward apparent competence. Over time, the natural "but why?" reflex gets replaced with a nodding acceptance that protects social standing at the expense of genuine understanding.
The professionals I work with who struggle most with AI are often the most experienced. Not because experience is a disadvantage, but because experience has taught them to reach for answers quickly and efficiently. In a world of scarce information, efficiency is a virtue. In a world of abundant AI-generated information, it can become a liability.
Reclaiming the beginner's mind
The Zen concept of "beginner's mind" — shoshin in Japanese — describes the quality of approaching familiar situations with the openness and lack of preconceptions of a beginner. It is not about being uninformed. It is about being willing to be surprised.
In the context of AI, beginner's mind means asking: "What if I approached this differently? What am I assuming that I don't have to assume? What would a curious child ask here?" It means giving yourself permission to push past the first reasonable output and keep exploring.
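For readers who interact with AI through code rather than a chat window, that habit can even be made mechanical. The sketch below wraps any model-calling function in a loop that always presses with child-like follow-ups; `ask_model` is a hypothetical stand-in for whatever client you actually use, and the follow-up list is just one illustration, not a recommended script.

```python
# Child-like follow-up questions: never accept the first answer.
FOLLOW_UPS = [
    "Why do you think that?",
    "What are you assuming here?",
    "What would change your answer?",
]

def curious_session(ask_model, question):
    """Ask the initial question, then press with follow-ups.

    `ask_model` is any callable taking a prompt string and
    returning an answer string (a hypothetical stand-in for
    a real model client). Returns the full transcript as a
    list of (prompt, answer) pairs.
    """
    transcript = [(question, ask_model(question))]
    for follow_up in FOLLOW_UPS:
        transcript.append((follow_up, ask_model(follow_up)))
    return transcript

# Stub model for demonstration only: echoes the prompt back.
def echo_model(prompt):
    return f"(model answer to: {prompt})"

history = curious_session(echo_model, "Summarize this research finding.")
for prompt, answer in history:
    print(prompt, "->", answer)
```

The point is not the code itself but the shape of the interaction: one initial question followed by deliberate, repeated "but why?" probes, exactly the pattern a curious child falls into without trying.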
My son did not teach me this. He reminded me of it. I had known it once, and slowly unlearned it. Working with AI is, among other things, an invitation to learn it again.