They Know More About AI Than We Do
Why the AI cheating crisis is really a motivation crisis and what schools can do about it
A teacher (let’s call him Mr. S) said this to me recently, not out of panic, just resignation:
“They [students] know more about AI than we [teachers] do.”
We were talking about how different the classroom feels today: the sense that something has quietly shifted, that assignments don’t land the same way, and that trust is harder to come by.
This isn’t just about ChatGPT or Gemini or whatever’s next. It’s about what happens when students are fluent in tools that most adults barely understand—tools that can do the work for them. These students may not always be mastering civics, composition, or problem-solving at the rates we hope, but they are smart. Just remember how quickly they adopted disappearing messages on Snapchat to dodge parental oversight. Now, with AI, they’re often one step ahead of their teachers too.
To be clear: using AI isn’t inherently wrong; it can be a powerful learning partner. But copying and outsourcing all the thinking is different. And right now, many students aren’t using these tools to learn; they’re using them to check a box, because that’s what the system inadvertently taught them to do.
The real problem isn’t the technology; it’s the motivation.
When school feels like a high-stakes points game, students do what any rational player would: look for the fastest path to a high score. AI just happens to be the most efficient shortcut. This is the core issue: not the tools, but the incentives. We’ve built a system where momentary snapshots (grades) carry more weight than the full arc of learning and growth.
I was recently talking to an investor I really respect about ClassWaves, a tool we’re building that helps teachers see student thinking in real time by capturing peer dialogue. And he asked me:
“Why would a school adopt a technology focused on enabling more peer dialogue to drive deeper learning and critical thinking? What’s the killer use case, the thing they genuinely can’t live without?”
He wasn’t dismissing the idea. Like me, he believes in the value of helping students become more thoughtful, collaborative, and capable of navigating complexity. He wants young people to graduate not just with content knowledge, but with the ability to think critically, engage in civic life, and build real understanding through conversation. But he also understands how decisions get made in schools, where budgets are tight, classrooms are full, and school leaders have to choose between tools that promise measurable impact, save time, or simply reduce daily chaos.
This conversation reminded me of a book I read years ago called Half the Sky, by Nicholas Kristof and Sheryl WuDunn. The authors argue that supporting women’s rights isn’t just a moral imperative but an economic one: investing in women and girls unlocks not just dignity and justice, but economic growth. In other words: even if you don’t care about equity, the numbers should still convince you.
The truth is, a mission-aligned vision isn’t always enough. Teachers and school leaders overwhelmingly do care about students. They chose to work in education because they want kids to thrive, to learn, to become good humans. But they also face shrinking budgets and impossible pressures. They can’t say yes to every tool, so the solution has to solve a hair-on-fire problem.
Right now, that hair-on-fire problem is evaluation.
Over and over, in dozens of conversations with teachers, one theme keeps surfacing: grading and evaluation are the most exhausting, broken parts of the job. Mr. S told me that student use of AI has only made things worse. Many of his colleagues feel they have to be on constant alert for cheating, not because it’s everywhere, but because it’s unpredictable. And that uncertainty erodes trust.
Imagine this: you read a beautifully written essay, and your first reaction isn’t pride or curiosity. It’s doubt. Did they write this? Or did they just get good at hiding it?
The signals teachers used to rely on, like voice, effort, and process, are harder to see. And the tools that claim to help misfire often and have real consequences. A 2024 study of six leading AI detectors found their average accuracy slid from an already-weak 39.5% to just 17.4% when students lightly paraphrased AI-generated text (GenAI Detection Tools, Adversarial Techniques and Implications for Inclusivity in Higher Education). And, as always, students can find a way around guardrails: AI humanizer services such as HumanizeAI and QuillBot openly market themselves as ways to slip ChatGPT output past every detector on the market.
So we fall back on older methods, like handwritten essays. But these strategies just add stress and grading time, and they ultimately move us backward, away from the kinds of thinking and expression we want students to grow into.
It’s tempting to think that adding more guardrails will fix it: that more constrained bots, or “closed” AI systems that don’t give the answer right away but ask scaffolded questions, will keep students honest. But students are already jailbreaking these tools. A recent study found that nearly half of students disable guardrails within a few attempts (Exploring Student Behaviors and Motivations using AI TAs with Optional Guardrails).
If students aren’t motivated to learn, if they don’t believe the process matters, they will always find a way around the system.
What we need isn’t more enforcement; we need to change the game. There’s a different path, one that doesn’t rely on high-stakes artifacts or last-minute detection. Instead of asking students to prove what they learned after the fact, what if we captured learning as it happened?
This is where technology can actually help. Not as a watchdog, but as a window. Small, authentic signals like student talk during group work, conversational turns, emerging ideas, and real-time misunderstandings can give teachers a fuller, more accurate picture of student thinking as it unfolds.
These are the kinds of moments that AI can’t fake. And more importantly, when students know these contributions count, they have a reason to engage again. Not just to perform, but to participate. This isn’t a workaround for cheating. It’s a shift back to the purpose of school: to help students think, struggle, connect, and grow.
Right now, a lot of edtech is focused purely on evaluation. And sure, some of it is needed. But swapping the essay for something temporarily AI-resistant doesn’t solve the deeper problem; it just kicks the can down the road.
What we need is more fundamental: a new approach to motivation, and a new way to see learning. One that values process over product, and effort over shortcuts. One that helps students believe there’s a reason to learn, not just a grade to earn.
If AI has disrupted old forms of assessment, maybe that’s not a crisis; maybe it’s the push we needed to design something better. Formative, real-time evidence is what good teachers have always wanted. The difference now is that we can finally build it at scale.
That’s the goal of ClassWaves, the tool we’re building: to capture the thinking and growth that traditional assessments miss, and to make it visible when it matters most. It can solve the current hair-on-fire problem of evaluation, but it’s also so much more. This isn’t about cheating anymore; it’s about trust, and about bringing back the kind of learning that doesn’t need to be defended or doubted. Maybe I’m being overly optimistic, but I believe this is the kind of learning students might actually want to do, because they know it counts, and they know someone is paying attention not just to what they produce, but to how they grow.
If we want students to value learning again, we need to stop designing systems that only reward the shortcut. And if we want trust back, we have to start seeing the work that’s been there all along.
Are you seeing this shift too? I’d love to hear how you’re rethinking motivation and learning in your classroom, district, or product.