The most compelling takeaway is that the gains didn’t come from removing Chromebooks in isolation, but from what their absence made possible: tighter feedback loops, more responsive teaching, cleaner attention, and fewer competing demands on student cognition. Those second-order effects matter more than any tool debate.
I’m struck by how much easier it became to see thinking. Paper, whiteboards, and proximity made misconceptions visible in real time — not buried in dashboards to be reviewed later (or never). That’s a powerful reminder that formative assessment works best when it’s immediate, human, and actionable.
The honesty about tradeoffs is important too. Less tech meant more planning and more teacher energy. This wasn’t a shortcut; it was a recommitment to craft. And the fact that students described paper as “harder” feels like confirmation rather than a warning. Difficulty that produces effort, focus, and stamina is often exactly where learning deepens.
"...the gains didn’t come from removing Chromebooks in isolation, but from what their absence made possible..."
Very important point here.
Thanks for writing this - I always appreciate the nuance you put into your posts!
Last night my oldest needed some help with math, and he groaned (groaned!) when I said we were going to use an AI. Claude Code did a great job creating a step-by-step, color-coded comparison between two methods of polynomial division. It was reminiscent of what I would expect a direct-instruction Algebra 2 worked example to look like. That helped him understand synthetic division more quickly and effectively than I would have been able to, and it worked well for checking his answers. At the end of the day, that setting seems the most promising for using technology to improve learning. It's different from the classroom context, where social motivation and other interactions are available.
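For anyone who hasn't seen the two methods side by side, here's a rough sketch of the kind of comparison I mean (my own illustrative example, not Claude's actual output):

```latex
% Illustrative example (mine, not the one Claude generated):
% divide P(x) = x^3 - 4x^2 + x + 6 by (x - 2).
%
% Synthetic division, using the root c = 2 on the coefficients 1, -4, 1, 6:
%
%   2 |  1   -4    1    6
%     |       2   -4   -6    <- each entry is the previous bottom value times 2
%     ----------------------
%        1   -2   -3 |  0    <- quotient coefficients | remainder
%
% Long division reaches the same answer term by term:
% divide x^3 by x to get x^2, subtract x^2(x - 2), and repeat on what's left.
\[
  \frac{x^3 - 4x^2 + x + 6}{x - 2} = x^2 - 2x - 3 \quad (\text{remainder } 0)
\]
```

Both routes land on the same quotient; synthetic division just strips the bookkeeping down to arithmetic on the coefficients, which is exactly what the color-coding made visible.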
Agreed, the type of stuff you're describing is similar to what I often use AI for. It can be great!
Something I'm curious about: my associations around "AI" are rooted in popular culture -- the movies, books, etc. I consumed about AI (think I, Robot, Terminator, Her). Those all come with the assumption of some level of intelligence.
What I'm seeing from kids is that their associations with AI come from experience with our current generation of models, and they seem to mostly associate it with slop: cheating, unrealistic videos where humans have six fingers, videos with weird physics, etc. It's like AI has been unchained from the idea of intelligence and now just refers to using a computer to generate something that's probably low quality.
The strugglers' average completion went from 45% to 62%? That's a meaningful uptick.
Maybe we should sell Tech Free January as an edtech product! One that can increase struggler productivity... by 38%!
There's a lot of money to be made here!
I felt a bit conflicted about this result. Completion was definitely lower at the start of the month as I worked through getting the new routines down and refining my system for generating paper-based mixed practice.
That said, it took me years to get my systems for DeltaMath refined to the point they were at in December. DeltaMath completion was way lower too when I first introduced it. All of this is pretty contingent on the systems and ecosystem surrounding the assignments.
Really insightful breakdown of how removing tech opened up space for better pedagogy. The bit about circulating and catching misconceptions in real time versus drowning in dashboards is so spot-on. I've seen similar patterns where the "data collection" actually creates distance from the learning moment. The 45% to 62% completion jump for strugglers is telling too, especially when paper can't be done later like online assignments.
I appreciate this thoughtful and well-written post. I've recently migrated toward a "weekly packet," a cluster of worksheets and puzzles aligned with our standards for the week. My students, who have been on Chromebooks since kindergarten, love these packets and beg to work on them. They seem to have a greater sense of accomplishment when they've filled something out physically, rather than just clicking on a screen.
Thanks! I feel so concerned about students spending lots of time on Chromebooks beginning in kindergarten. That has to have some negative effects on the developing brain...
I have also found myself moving toward less tech in the classroom than I used to use. Back in 2013-2017 it was seen as being on the cutting edge of education innovation to be using all of these tech tools that promised learning gains (IXL, DeltaMath, Desmos, etc.), but over time I started to see that they were not always delivering and generally not worth the cost of disconnection, and sometimes disengagement, that comes with screens. I see hardly any argument for using them in elementary. As students move toward HS and college there are more compelling use cases. I still find value in my AP Stats class with stapplets, which are far more efficient than TI's, and AP Classroom practice sets would be very time-consuming to replicate as worksheets.
Yeah, I've never taught AP Stats so I can't speak to that, but it definitely makes sense. The practice of statistics is very different than it was a few decades ago because of technology, which isn't quite true for the practice of solving two-step equations. I was in a similar place a decade ago and I thought I was so cool! It's funny how hard it can be to see whether or not all those changes actually led to more learning.
Curious if any research exists on that topic. I would assume those companies tried to demonstrate results but have impartial studies examined their success rates?
I haven't done tons of poking around, but one example: we had some literacy consultants come in, and one year they pushed really hard for our English teachers to put kids on i-Ready for at least 60 minutes each week. They kept saying that research showed kids who spent 60+ minutes per week on the program made more progress than other students. I looked up the study, and there is indeed a study showing that. It's tough to tell whether it's independent; it's not in a formal journal, and I figure it was funded at least in part by the parent company. But at a glance, could a typical school leader tell? I don't think so. That just seems like such a big ask. And you have these highly paid consultants coming around saying teachers should do x, when the whole reason you hire consultants is so you don't have to double-check everything they do. The whole thing just feels like a mess. If you google around enough, you can find research that supports just about anything.