Critical Thinking
Interrogating AI outputs — and your own assumptions — before accepting them as true.
What this skill is
Critical thinking, in the context of AI, means being willing to examine what you're being told — by the AI, by a source the AI cites, by a conclusion that sounds logical — rather than accepting it because it sounds right. It's the habit of asking: "What would have to be true for this to be wrong? What am I not being shown? Whose interests does this framing serve?"
This isn't paranoia. Most AI outputs are roughly right. Critical thinking isn't about rejecting everything — it's about having a filtering function rather than being an open pipe. The child who asks "wait, how do we know that?" before accepting a confident AI answer is developing a muscle that will protect them from manipulation, misinformation, and mediocre work for the rest of their life.
Why it matters in an AI world
AI makes critical thinking harder in a specific way: it packages uncertain claims in confident language. A book that says "maybe X" and a news article that says "definitively X" feel different. An AI that says both things says them in the same calm, authoritative tone. The confidence signal is gone. Which means your child has to evaluate the content itself — not rely on tone, hesitation, or source signals — to know how much trust to extend.
There's also the question of framing. AI outputs always reflect choices: what to include, what to emphasize, what to leave out, what perspective to lead with. A child who doesn't notice framing can receive a completely accurate set of facts and still walk away with a misleading picture of reality. Critical thinking means asking: "What's not in here? What would this look like from a different angle?"
And then there's the AI-specific problem of hallucination: AI will sometimes state invented facts with complete confidence. The failure mode isn't obvious. It looks like knowledge. The only defense is a child who asks, regularly: "How would I check this?"
What it looks like in your child
- Before accepting an important claim from AI, they ask "how could I verify this?" — and sometimes actually do
- They notice when an argument has a missing step and ask about it rather than filling it in themselves
- They can articulate the perspective that *isn't* represented in an AI answer, not just the one that is
- They distinguish between "I believe this is true" and "AI told me this is true" — two genuinely different epistemic states
Challenge: Try this this week
Trivial Pursuit Confident & Wrong. Play any trivia game — Trivial Pursuit, a phone app, anything. Before each answer, players must rate their confidence from 1–5. After the round, map confidence against accuracy: when were you most confident and wrong? Discuss: what felt like certainty but wasn't? This is exactly the experience of trusting overconfident AI. Then do the same with one round of AI-generated trivia: ask an AI for 10 questions and answers on a topic you know well, and find the errors together.
What to watch for
- Confidence calibration collapse: They're either overconfident (accepting everything AI says) or underconfident (rejecting it all as unreliable) — unable to apply selective trust based on actual evidence quality
- Frame acceptance: They absorb the framing of AI answers without noticing that a different framing is possible. If AI frames a historical event as primarily about economics, they don't ask "is there a different frame?"
- Verification avoidance: They know they should check AI claims but don't because it's inconvenient. This is different from not knowing — it's a willingness gap, and it needs a different intervention than a knowledge gap.
Games that develop this skill
Trivial Pursuit Confident & Wrong — Players rate confidence before answering. Track where calibration fails: high confidence + wrong answer is the signal to study. Builds awareness of the gap between feeling right and being right.
Catan Evidence Ladder — During resource negotiations, each player must justify their offer with one piece of evidence (board state, probability, strategic position). Others can challenge the evidence before accepting. Builds the habit of demanding justification before agreement.
Fact, Inference, or Opinion? — Read any paragraph from a news article or AI output together and classify each sentence: is this a fact, an inference, or an opinion? Disagreements are the most valuable moments. Do this for 10 minutes with something your child read this week.
See all nine skills
Each skill builds something distinct. Browse the full set to see where your child might have gaps.
All Skill Builders →