Students aren’t sneaking artificial intelligence into school. School is sneaking into AI.
By now, you’ve probably read countless hand-wringing headlines warning that chatbots will turn a generation of students into lazy plagiarists, incapable of original thought, dependent on a stochastic parrot to finish their homework.
This march-toward-idiocracy story is tidy, familiar and comfortably backwards. We built an education system that rewards polished output, then we act shocked when students reach for the most efficient polishing machine ever invented.
Ask students what they’re actually doing with AI, and a different picture emerges.
They’re not just generating term papers. They’re using chatbots to decode assignments teachers didn’t adequately explain, to solve math problems they only half remember, to rewrite clumsy sentences so they don’t sound stupid in front of teachers or professors. They offload confusion and, increasingly, discomfort. AI has become a study buddy, ghostwriter, therapist and social body armor all wrapped up in one.
The real disruption isn’t that students are outsourcing work; it’s that they’re outsourcing nerve.
Image generated using ChatGPT.
A Pew survey last fall found more than half of teens using AI for school — 44% for “some” help, 10% for “most or all” of it. Nearly 60% say classmates cheat with it “very often” or “somewhat often.”
However, the quieter trend is emotional offloading: kids rehearsing tough texts to friends, apologies to professors, even dating openers with bots that tell them they’re “absolutely right” every time.
Just as over-reliance on GPS atrophies spatial awareness, constant AI mediation risks de-skilling the messy human work of reading a room, taking a risk or owning a mistake.
Here’s a dose of reality practically nobody wants to hear: American education wasn’t working even before ChatGPT arrived.
Pre-pandemic, the gold-standard National Assessment of Educational Progress showed fewer than half of 8th graders proficient in reading or math. Post-COVID, those numbers cratered further. Math proficiency is down nine points, while per-pupil spending has hit $15,000-plus nationwide. We’ve poured in money, tests, ed-tech, consultants, yet kids are further behind today than in 2019.
Why? Because our education system isn’t built for learning. It’s built for teaching, and teaching is a business. Cram 25 kids in a room, one instructor at the front, one pace for all. It’s the most cost-effective way to distribute lectures and grades at scale.
Image generated using ChatGPT.
We’ve known for 40 years it’s suboptimal. Psychologist Benjamin Bloom’s landmark 1984 study found that one-on-one tutoring boosts performance by two standard deviations, turning C students into A students. His verdict: It’s too expensive for most societies. So we standardized the cheap model instead.
That model made sense when human tutors were the only alternative. Now AI changes the math. Generative AI doesn’t threaten this setup. It exposes its flaws.
Half of teens already use chatbots for schoolwork because they get faster, less judgmental help at midnight than from an overburdened teacher or a textbook. You can’t ban that any more than you can ban Google or calculators. Students will keep building their own bots, just as they’ve always gamed whatever system we hand them.
The smart move wouldn’t be restricting AI use; it would be harnessing it.
At New York University, where I run the university’s online master’s in journalism program, I’ve built AI tools for my classes that don’t flatter or spoon-feed.
Upload several ethics case studies, configure the chatbot as an aggressive debater, and suddenly students can’t hide behind vague terms like “immoral” or “newsworthy.” The bot demands: “Define it. Defend it. Show me your evidence.” No easy wins, just the kind of rigorous back-and-forth newsrooms used to demand daily.
This isn’t “AI literacy.” It’s AI as a sparring partner, forcing critical thinking where sycophantic commercial bots erode it.
Scale it, and you get: instant writing assessments that score structure, quotes and clarity; interactive First Amendment games with retention checks; journalism simulators chasing real scoops. Track progress class-over-class. All dirt cheap compared to the bloat of lectures and multiple-choice drudgery.
Image generated using DALL·E.
Colleges and K-12 districts face a stark fork. Keep pretending AI is optional, mandate “blue books” and pens for midterms and finals, watch engagement tank as kids get bespoke help off-platform anyway. Or redesign your curricula: flip class time from info-dumps to judgment drills, debates, live ethics wrestling, relying on human skills no bot can replicate.
Use AI for the rote, the scalable, the 1 a.m. confusion-clearing. Measure what matters: not output polish, but depth, adaptability, nerve.
Education acts like a trillion-dollar business because it is one: $1.5 trillion in K-12 alone. But businesses pivot when technology obsoletes the core model. Bloom’s 2 Sigma dream was always economically impossible. Now, AI makes it inevitable.
We can’t stop kids from using AI for school, but we can redesign school to make the educational experience deeper, more personal and harder to fake.
