Skill 1 of 9

Cognitive Agency

Knowing what you actually want — and forming that into clear intent — before you hand anything to AI.

What this skill is

Cognitive agency is the ability to show up at an interaction — with an AI or with anyone — knowing what you actually want from it. It sounds simple. It isn't. The path of least resistance with AI is to open it, type something vague, and see what comes back. That habit, practiced thousands of times, trains your brain to outsource the hard part: forming an intention in the first place.

The flip side is someone who pauses before opening an AI tool and asks: "What am I actually trying to accomplish here? What would a good answer look like? What do I already know that should shape this?" That pause — thirty seconds of internal work before delegating — is cognitive agency in action. It keeps the human in charge of the direction.

Why it matters in an AI world

AI is extraordinarily good at satisfying stated intentions. The problem is that vague inputs generate vague outputs, and the user often doesn't know the output is vague until they've already based a decision on it. A child who asks "help me write about the Civil War" gets an essay. A child with cognitive agency asks "help me write a 500-word argument for why the Civil War's economic causes are underemphasized in most middle school curricula, aimed at convincing a skeptical teacher." These are completely different products, and the second child's brain did actual work before the AI entered the picture.

As AI becomes capable of doing more and more, the premium on knowing what you actually want increases, not decreases. When AI can do anything you describe well, the differentiator is the quality of what you describe. The child who grows into an adult with genuine cognitive agency — who can form clear, sophisticated intentions and translate them into precise direction for AI — will be capable of things that AI-dependent peers simply can't replicate.

There's also a deeper risk: children who habitually outsource intention-formation lose the ability to do it independently. They become good at responding to prompts — at answering questions, completing assignments, following directions — but are genuinely at a loss when the direction has to come from inside them. You've probably met adults like this. You don't want to raise one.

What it looks like in your child

  • Before starting a project, they spend a few minutes writing down what they want the end product to look like before touching any tool
  • When AI gives an answer that's off-base, they can quickly articulate *why* it missed and redirect it clearly rather than accepting the bad answer or starting over from scratch
  • They distinguish between what they're asking for and what they actually need — and can explain the difference
  • They push back when asked to do tasks with unclear goals: "I want to understand what success looks like here before I start"

Challenge: Try this week

The Pre-Prompt Pause. Next time your child is about to use AI for something — homework, writing, research — ask them to spend three minutes first writing down: (1) what they want as a final result, (2) what they already know that's relevant, and (3) what a bad answer would look like. Only then do they open the AI. Compare the result to a previous time they just dove in. Did the three-minute investment change the quality of what came back?

What to watch for

  • Opens AI before knowing what they want: They use AI to figure out what to think about, rather than thinking first and then using AI to develop it — a subtle but important inversion
  • Accepts the first answer without evaluating it: If they never push back or redirect, they're not exercising agency — they're just receiving outputs
  • Can't articulate what they were trying to do: After completing an AI-assisted task, they can't clearly state what the goal was — a sign that the goal was never clearly formed

Games that develop this skill

Trivial Pursuit Prediction Market — Before each question, players must predict *how confident* they are (0–100%) that they know the answer. The goal isn't just to answer correctly — it's to accurately predict your own certainty. This builds the metacognitive habit of knowing what you know, which is the foundation of cognitive agency.

20 Questions (Strict Mode) — One player chooses any object; others can only ask yes/no questions. The twist: before asking, each player must write down what they're trying to figure out with the question (not just the question itself). This forces explicit intention before action.

Design Brief Before Build — In Minecraft or any building game, before touching the controls, write a one-paragraph brief: what you're building, why, and what makes it successful. Then build. Review afterward: did the brief shape the outcome?

See all nine skills

Each skill builds something distinct. Browse the full set to see where your child might have gaps.

All Skill Builders →