Last week, I was in a classroom with a group of students who need more AI knowledge to augment what they learn in the regular curriculum. A personal mission of mine. Not the “lecture hall” kind of classroom, but the messy, buzzing kind: laptops open, coffee cups scattered, and a quiet background hum of ChatGPT tabs working overtime.
One student waved me over. “Wiemer, look, ChatGPT wrote my entire market analysis in two minutes.”
It looked good. Professional. Structured. Numbers, charts, even footnotes.
But then I asked the simplest question in the world: “Which of these claims are actually true?”
Blank stares. A nervous chuckle. A quick scan of the text.
And then that dawning realization: they didn’t know.
That’s when it hit me.
If you drop today’s students into the GenAI ocean without a life jacket, they won’t drown because they can’t swim. They’ll drown because they can’t tell the difference between a wave and a whirlpool.
And that life jacket? It’s called critical thinking.
Not the “write an essay about Socrates” kind I had to do in high school. Not the dusty logic exercises you half-remember from first year. But the practical, AI-era version.
I watched as one student scrolled through the AI’s answer, eyes darting over the neatly formatted paragraphs. At first glance, it looked flawless. But as we read together, I pointed to one sentence. “Is that a fact, or just the AI’s opinion dressed up as fact?” She hesitated. Another line. “Notice how vague that is. What would you need to ask to make it precise?” By the third question, she was leaning forward, spotting the gaps herself. This is where good reasoning starts: learning to separate truth from fluff, and to close the gaps with sharper, more deliberate questions.
Later, another group showed me an AI-generated project plan. Impressive, until we traced the logic behind its recommendations. “Wait,” one student said, “step three doesn’t actually follow from step two.” Exactly. AI can sound confident while skipping over its own contradictions, and unless you know how to follow the thread, you won’t notice the knots.
Then there was the marketing forecast. Beautiful graphs, persuasive language, confident predictions. But confidence isn’t certainty. I asked, “Is this a rock-solid conclusion or just the AI’s best guess?” They went back to check the data, and discovered it was a guess built on shaky comparisons.
And finally, the social media example: a perfectly crafted quote from a “recent study.” Except the study didn’t exist. The quote was invented. The students looked at me, half amused, half alarmed. It took us 30 seconds to verify, but imagine if they’d shared it without checking. In a world where AI can create convincing fakes faster than we can read them, verifying before amplifying isn’t just a skill: it’s survival.
The four elements I want to build into the curriculum could look like this (very much a work in progress):
Part 1 – The basics of good reasoning → AI prompt literacy & source sanity checks. Can you read an AI answer and separate facts from opinions? Spot the vague phrasing? Ask a better follow-up question to close the gap?
Part 2 – Deductive reasoning → Logic-checking AI outputs. If AI makes a recommendation, can you trace the chain of reasoning? Spot where it skipped a step? Catch the moment it contradicted itself?
Part 3 – Inductive reasoning → Judging AI predictions under uncertainty. GenAI is a master of probabilities, not certainties. Can you tell the difference between a solid conclusion and a “best guess dressed in confidence”?
Part 4 – Application → Surviving the flood of AI-generated misinformation. Deepfakes, fake studies, fabricated quotes—AI can make them in seconds. Can you verify before you amplify?
We teach math so students can count their money.
We teach language so they can express themselves.
But in the GenAI era, we must teach critical thinking so they can survive their own tools.
Because AI won’t replace students who can reason well.
But students who can’t? They’ll be replaced by the first confident chatbot that sounds smarter than they are.