AI Chatbots and Self-Regulated Learning: Why Student Use Often Backfires
April 27, 2026
When students use ChatGPT, Claude, or Gemini for homework, the surface story is positive: they get unstuck quickly, the work gets done, the grades hold up. The deeper story is that the cognitive work that builds self-regulated learning (the cycle of planning, monitoring, and reflection) is being silently outsourced to the chatbot. Without explicit teaching, students learn where to get an answer rather than how to construct one.
This article explains why unrestricted AI chatbot use undermines self-regulated learning, what cognitive offloading research actually shows, and how to redesign tasks so AI becomes a feedback partner rather than an answer generator.
Key Takeaways
AI is not neutral on learning: Sparrow, Liu and Wegner (2011) showed that knowing information is stored externally weakens memory of the information itself, even when retrieval is easy.
Cognitive offloading bypasses SRL: Risko and Gilbert (2016) documented that the lower the cognitive cost of asking, the less likely students are to attempt the planning and monitoring phases of self-regulated learning.
Position AI as a feedback partner: Tasks designed around "submit your draft, ask the AI to challenge it" preserve the cognitive work; tasks designed around "ask the AI for the answer" do not.
Teach prompt evaluation explicitly: Students need to learn to question AI output the way they would question a peer, including checking sources, looking for weak arguments, and rejecting confident-but-wrong answers.
[Figure: The Self-Regulated Learning Cycle vs. the AI Shortcut]
Why "Letting Them Use AI" Doesn't Build Independent Learners
The intuitive case for letting students use AI freely is that they will learn to learn from it. The student who is stuck on a quadratic equation can ask Claude, get a worked solution, and move on. The teacher is freed up. The student is unblocked. Everyone wins.
The case is false because it confuses task completion with learning. Zimmerman (2000) defined self-regulated learning as a cyclical process of forethought (planning), performance (monitoring effort and using strategies), and self-reflection (judging one's own work). When a chatbot completes the performance phase on the student's behalf, the self-regulation cycle is broken. The student does not plan, because they don't need to. They do not monitor effort, because the effort is the AI's. They do not self-reflect, because there is no work of their own to reflect on.
This matters because self-regulated learning is among the strongest predictors of long-term academic success that we have. Hattie (2009) ranked metacognitive strategies in the top 10 of his meta-analyses (effect size d = 0.69). The students who go on to thrive at A-level, university, and in working life are the ones who can plan their own work, notice when they don't understand, and act on that noticing. AI used as an answer generator removes the practice opportunities for exactly this skill.
Classroom example: Year 10 pupil Hassan has a history essay on the causes of the First World War. Without AI, he reads three sources, struggles to integrate them, and writes a flawed argument that is recognisably his own. With unrestricted ChatGPT access, he produces a polished 800-word essay in twelve minutes. The mark is higher. But the cognitive practice of building an argument from sources, the actual skill the essay was meant to develop, never happened.
The Research on Cognitive Offloading
Cognitive offloading is the use of external aids (calendars, search engines, AI) to reduce the cognitive demand on working memory. Risko and Gilbert (2016) reviewed 40 studies on the topic and found a consistent pattern: when the cost of offloading is low and the cost of in-head processing is high, people offload. They also found that offloaded knowledge is recalled less reliably than knowledge processed internally.
Sparrow, Liu and Wegner (2011) ran a foundational study on what they called the "Google effect": students who were told that information would be stored on a computer remembered the information itself less well, but remembered the location where it was stored more accurately. The brain optimises for the easier task. AI chatbots represent an even more extreme form of cognitive offloading because they do not just store information — they perform the cognitive operations on it (synthesise, summarise, argue) that the student would otherwise have done.
Storm and Stone (2014) showed that the offloading effect is not limited to memory: students who used external aids to solve problems were less able to solve similar problems unaided afterwards. The cognitive structures that would have formed during effortful problem-solving never form.
There is a counter-argument from the "extended mind" literature (Clark and Chalmers, 1998) that external cognitive aids are a legitimate part of thinking, not a corruption of it. The counter-argument has merit when the goal is task completion. It has limits when the goal is the development of the thinking apparatus itself, which is the goal of education.
What Students Actually Do with Chatbots
When students use chatbots for academic work, classroom observation studies (e.g. Walter, 2024; Bearman et al., 2024) consistently report four patterns:
Direct answer extraction: "What is the answer to question 3?" The student copies the answer and moves on.
Polish and submit: The student writes a rough draft, asks the AI to "make this better," and submits the AI's version with light edits.
Outline-and-fill: The student asks the AI for an outline, then fills in the sections, often using the AI for individual paragraphs.
Confirmation-seeking: The student writes their answer, then asks the AI "is this right?" and accepts the AI's verdict without scrutiny.
All four patterns are forms of cognitive offloading. None preserves the SRL cycle. The fourth is particularly concerning because it looks like the student is doing the work — but the metacognitive judgement (am I right?) has been delegated.
Classroom example: Year 12 pupil Olivia uses ChatGPT for confirmation-seeking on her chemistry calculations. Her marks are excellent for two terms. In the mock exam, with no AI, she scores 31%. The deficit is not knowledge of chemistry — it is the absence of any practised judgement about whether her own answers are right.
How to Redesign Tasks So AI Builds Rather Than Bypasses SRL
The fix is not to ban AI. The fix is to design tasks where AI is structurally positioned as a feedback partner rather than a content generator. Three concrete redesigns:
1. The Draft-Then-Challenge Pattern
Students must produce a first draft entirely on their own, handwritten or typed, before any AI use is permitted. They then submit the draft to the AI with the prompt: "Find three weaknesses in this argument and suggest counter-evidence." The student responds to the AI's challenges in a second draft. The AI never writes content; it only challenges content.
This preserves the planning, performance, and reflection phases of SRL because the student's own thinking generates the input that the AI then critiques.
2. The Source Verification Task
Students are given an AI-generated answer to a question they have not yet attempted. They must verify the answer against three primary sources, identifying any claims the AI made that the sources do not support. The cognitive work is in the verification, not the generation.
This pattern explicitly teaches the most important AI literacy skill: AI confidently produces plausible-but-wrong information, and the student must develop the habit of checking.
3. The Reflection Prompt Library
Students use the AI exclusively for metacognitive reflection prompts: "Ask me three questions that would help me understand this concept more deeply." "What is the most likely misconception a student would have about this topic?" "What question should I be asking myself before I write this paragraph?"
This pattern uses AI to scaffold the SRL cycle rather than to bypass it. The student remains the cognitive agent; the AI plays the role of a Socratic prompt.
Classroom example: Year 9 English teacher Mr Chen replaces the standard "write an essay on Macbeth's ambition" task with a draft-then-challenge variant. Students write 400 words by hand in class, then take their draft home and prompt the AI: "What are three weaknesses in this argument?" In the next lesson, students respond to the AI's critiques in a second draft. Essay quality improves; more importantly, the practice of evaluating one's own argument and responding to critique becomes habitual.
[Figure: From Answer Generator to Feedback Partner: Redesigning AI Tasks]
The SEND Dimension: AI as Cognitive Scaffolding for Executive Function
For neurodivergent learners with executive function challenges (ADHD, autism, working memory difficulties), the case for AI assistance is genuinely stronger. Risko and Gilbert (2016) noted that cognitive offloading is more beneficial when the in-head cost is genuinely too high for the learner. A pupil with severe working memory limitations may need external scaffolds to access the curriculum at all.
The redesign for SEND learners is to use AI as an explicit external system: a checklist generator, a reading plan creator, a "what-step-comes-next" prompt. The AI replaces the cognitive operations the student cannot perform internally, while the curriculum content remains intact. This is consistent with the principle that scaffolding is about making learning processes accessible, not about simplifying content (Wood, Bruner and Ross, 1976).
The risk for SEND learners is the same as for neurotypical learners — that AI does the cognitive work the student needs to be developing — but the threshold for when offloading becomes legitimate scaffolding is genuinely lower.
Common Misconceptions
"My students will fall behind if I restrict AI use." They will fall behind in task completion, not in learning. The marks they earn through AI use do not reflect skill they will retain.
"Detection tools solve the problem." Detection tools are unreliable and treat AI use as cheating rather than as a misuse of a tool. The framing is wrong: students need to learn when AI helps and when it harms, not just whether they will be caught.
"AI literacy is a separate subject." AI literacy is a skill of every subject. The questions "is this output trustworthy?" and "what cognitive work is being delegated?" apply equally to history essays, maths solutions, and science explanations.
"Younger children should be insulated from AI entirely." The evidence base does not support this. The risk for young learners is the same as for older learners — cognitive offloading prevents skill development — but they also need explicit teaching about how to evaluate AI output, which they will encounter outside school regardless.
Limitations and Critiques
Three limitations of the current evidence base are worth flagging.
First, almost all of the cognitive offloading research predates the current generation of large language models. Sparrow et al. (2011) and Risko and Gilbert (2016) studied calendars, search engines, and notebooks, not generative AI. The mechanism is plausibly the same, but the magnitude of the effect for AI specifically is not yet well characterised.
Second, classroom research on AI use is in its infancy. Walter (2024) and Bearman et al. (2024) provide observational data but rarely with controlled comparisons. The conclusions in this article should be read as best current understanding, subject to revision as more rigorous evidence emerges.
Third, there is a legitimate equity concern: students with private tutors have always had access to one-to-one cognitive support that AI now offers more broadly. Restricting AI in schools may disadvantage students whose parents cannot afford tutoring. Selwyn (2024) argues that the answer is to teach AI use well, not to restrict it.
[Figure: 4 Critical Skills for Evaluating AI Output Like a Peer Reviewer]
Next Lesson
In your next lesson, run one of the three task redesigns above on a single piece of work. The draft-then-challenge pattern is the easiest place to start. Tell students explicitly that the goal is not to get the right answer but to practise the cognitive skill of challenging their own argument. Compare what you observe in the classroom and in the resulting work to a similar task without the redesign. The difference will tell you whether AI in your classroom is building learners or building dependency.
Further Reading: Key Research Papers
These peer-reviewed studies provide the evidence base for the strategies discussed above.
Learning analytics dashboards: What do students actually ask for?
Divjak et al. (2023)
This study examines what students actually want from learning analytics dashboards that track their academic progress. For teachers, understanding student preferences for data visualisation and feedback can help design more effective digital tools that genuinely support self-regulated learning rather than overwhelming students with irrelevant information.
Applying social cognition to feedback chatbots: Enhancing trustworthiness through politeness
Brummernhenrich et al. (2025)
Research shows that making AI chatbots more polite increases student trust and engagement with the technology. Teachers should consider how they frame AI interactions in their classrooms, as the perceived trustworthiness of chatbots directly affects whether students will use them effectively for learning support.
The who, why, and how of AI-based chatbots for learning and teaching in higher education: A systematic review
Ma et al. (2024)
This systematic review analyses who uses AI chatbots in higher education, why they use them, and how they're implemented. It provides teachers with evidence-based insights into effective chatbot integration strategies and highlights potential pitfalls to avoid when incorporating AI tools into their teaching practice.
Human-AI collaborative learning in mixed reality: Examining the cognitive and socio-emotional interactions
Dang et al. (2025)
This study explores how students learn when collaborating with AI in virtual reality environments, examining both cognitive and emotional responses. Teachers can use these findings to better understand how AI tools affect student thinking processes and social interactions in technology-enhanced learning environments.
Beyond the Hype: Towards a Critical Debate About AI Chatbots in Swedish Higher Education
Pargman et al. (2024)
Swedish researchers argue that higher education needs critical discussion about AI chatbots rather than uncritical adoption. This paper encourages teachers to question why they're using AI tools and whether these technologies truly align with educational goals, promoting thoughtful implementation over trend-following.
Free Resource Pack
AI Chatbots & Self-Regulation: A Learning Guide
3 practical resources for teachers and students to navigate AI chatbots without undermining self-regulated learning.
Tags: AI in Education · Self-Regulated Learning · Metacognition · Student Resource · Teacher CPD · Digital Literacy · ChatGPT · Learning Strategies · Academic Integrity
About the Author
Paul Main
Founder, Structural Learning · Fellow of the RSA · Fellow of the Chartered College of Teaching
Paul translates cognitive science research into classroom-ready tools used by 400+ schools. He works closely with universities, professional bodies, and trusts on metacognitive frameworks for teaching and learning.