AI in Lesson Planning: Saving Time Without Sacrificing Quality [2026]
AI can draft objectives, generate activities and align content to curriculum standards in minutes. But quality depends on the prompt.
![AI in Lesson Planning: Saving Time Without Sacrificing Quality [2026]](https://cdn.prod.website-files.com/5b69a01ba2e409501de055d1/694e864cc2e52d5302b98ecc_ai-in-lesson-planning-classroom-teaching.webp)

AI can generate a week's worth of lesson plans in minutes, but a vague prompt produces a generic plan that wastes more time editing than it saved. The difference between useful and useless AI planning comes down to prompt specificity: year group, prior knowledge, curriculum objectives, misconceptions and desired assessment evidence. This guide provides worked prompt examples across subjects and key stages, shows how to align AI outputs with Rosenshine's principles, and covers the common mistakes that make teachers abandon AI planning tools prematurely.

What does the research say? DfE pilot data (2024) indicates AI-assisted lesson planning reduces preparation time by 30-40% for experienced teachers. Rosenshine's (2012) principles remain the gold standard for lesson structure, and AI tools are most effective when prompted with these frameworks explicitly. Holmes et al. (2019) note that AI-generated lesson plans often lack the contextual knowledge of individual learners that effective differentiation requires, making teacher review essential.

A 2025 Twinkl survey of 6,500 UK teachers found that planning and resource creation is the most common AI application, with 47% of AI-using teachers applying it primarily to these tasks. The tools work: they generate objectives, sequence activities, create differentiated resources and align content to curriculum standards in minutes rather than hours. But the quality of the output depends almost entirely on the quality of the input. Knowing how to write effective prompts is the single most important skill for AI-assisted planning.
Every effective AI planning prompt contains five elements: context, objective, constraints, differentiation needs and output format. Missing any one of these produces a generic plan that requires more editing than it saves.
| Element | What to Include | Example |
|---|---|---|
| Context | Year group, subject, topic, prior knowledge, known misconceptions | "Year 5 Science. Learners can name the planets but confuse rotation and revolution." |
| Objective | Specific learning outcome with curriculum reference | "By the end of the lesson, learners explain the difference between Earth's rotation and revolution. NC KS2 Science: Earth and space." |
| Constraints | Lesson duration, available resources, activity types required | "60 minutes. Include a concrete demonstration using a torch and globe, paired discussion, and independent written task." |
| Differentiation | Specific learner needs, ability range, SEND considerations | "3 EAL learners need sentence stems. 4 learners working at greater depth need an extension comparing Earth and Mars orbits." |
| Output format | How you want the plan structured | "Format as: starter (5 min), teacher input (15 min), guided practice (15 min), independent task (20 min), plenary (5 min)." |
Combining all five elements into a single prompt produces a plan that typically needs 10-15 minutes of refinement rather than 45 minutes of creation from scratch. The more specific you are about what your learners already know and where they typically struggle, the more useful the AI output becomes. For a comprehensive library of prompt templates across subjects and key stages, see our guide to AI prompts every teacher should know.
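The five-element structure above can be captured in a small helper so no element is ever omitted. Here is a minimal sketch in Python; the class and field names are our own for illustration, not part of any AI tool's API:

```python
from dataclasses import dataclass

@dataclass
class PlanningPrompt:
    """The five elements every effective AI planning prompt needs."""
    context: str          # year group, subject, prior knowledge, misconceptions
    objective: str        # learning outcome with curriculum reference
    constraints: str      # duration, resources, required activity types
    differentiation: str  # learner needs, ability range, SEND considerations
    output_format: str    # how the plan should be structured

    def build(self) -> str:
        fields = {
            "Context": self.context,
            "Objective": self.objective,
            "Constraints": self.constraints,
            "Differentiation": self.differentiation,
            "Output format": self.output_format,
        }
        # Refuse to build a prompt with any element missing: a partial
        # prompt is exactly what produces a generic plan.
        missing = [name for name, value in fields.items() if not value.strip()]
        if missing:
            raise ValueError(f"Missing prompt elements: {', '.join(missing)}")
        return "\n".join(f"{name}: {value}" for name, value in fields.items())
```

Pasting the built string into any chat-based tool keeps the elements in the order the table above recommends, and the validation step catches the most common failure mode: forgetting one of the five.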
Different subjects have different planning demands, and the most effective AI prompts reflect those differences. Here are worked examples that teachers have refined through classroom use.
English (KS3). "Create a 60-minute lesson on persuasive writing for Year 8. Learners can identify persuasive techniques (rhetorical questions, repetition, emotive language) but struggle to use them in their own writing. Include: analysis of a model text with annotations, a scaffolded writing frame for lower attainers, and an assessment task where learners write the opening paragraph of a persuasive letter. Success criteria should reference AQA English Language Paper 2."
Maths (KS2). "Plan a lesson sequence of 3 lessons on adding fractions with different denominators for Year 6. Learners are secure with equivalent fractions but make errors when the denominators share no common factor. Use concrete-pictorial-abstract progression. Include fraction bar manipulatives in lesson 1, bar model representations in lesson 2, and abstract calculation with self-checking in lesson 3. Each lesson should include a 5-question exit ticket."
Science (KS4). "Create a GCSE Biology lesson on natural selection for Year 10 (AQA specification 4.6.1). Learners understand inheritance and variation but confuse natural selection with Lamarckism. Include: a common misconception activity as a starter, a worked example using peppered moths, a data interpretation task using finch beak measurements, and a 6-mark exam-style question with model answer. Provide differentiated sentence starters for the extended writing."
History (KS3). "Plan a 50-minute lesson on the causes of the English Civil War for Year 8. Learners have studied Tudor England but this is their first lesson on the Stuarts. Include: a timeline sorting activity, source analysis of two contemporary documents (one pro-Parliament, one pro-King), paired discussion using 'What might X have thought about this?' prompts, and a written judgement task: 'Was religion or money the main cause?' Provide a writing frame with connectives for lower attainers."
EYFS. "Create a week of continuous provision activities on 'People Who Help Us' for Reception. Focus on Communication and Language ELG. Include: a role-play area setup list (doctor's surgery), 3 adult-led focus activities for groups of 6, and 5 provocations for independent investigation. Each activity should develop vocabulary: stethoscope, prescription, appointment, symptom, diagnosis. Include home learning ideas parents can do in 5 minutes."

AI-generated lesson plans improve significantly when you prompt with Rosenshine's principles explicitly. Without this guidance, AI tends to produce lessons that are activity-rich but structurally weak, missing the review, modelling and guided practice stages that Rosenshine's research identifies as essential.
Try adding this to any planning prompt: "Structure the lesson following Rosenshine's principles: begin with daily review of prior learning (5 min), present new material in small steps with modelling (10 min), provide guided practice with checking for understanding (15 min), then independent practice with monitoring (15 min)." This single addition transforms the output from a list of activities into a cognitively sequenced lesson.
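That addition is easy to standardise: store the clause once and append it to every planning prompt. A sketch (the constant and function names are ours):

```python
# Rosenshine structure clause from the paragraph above, stored once so it
# can be appended to every planning prompt without retyping.
ROSENSHINE_STRUCTURE = (
    "Structure the lesson following Rosenshine's principles: begin with "
    "daily review of prior learning (5 min), present new material in small "
    "steps with modelling (10 min), provide guided practice with checking "
    "for understanding (15 min), then independent practice with monitoring "
    "(15 min)."
)

def with_rosenshine(prompt: str) -> str:
    """Append the Rosenshine structure clause to any planning prompt."""
    return f"{prompt.rstrip()}\n\n{ROSENSHINE_STRUCTURE}"
```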
The principle of "scaffolding difficult tasks" is where AI adds particular value. Once AI has generated a lesson plan, you can prompt it to create scaffolded versions of each activity: "Take the independent task and create three versions: one with a fully worked example and sentence starters, one with key vocabulary provided, and one with no scaffolding for learners working at greater depth." This produces differentiated resources in seconds that would take 20 minutes to create manually.
The principle AI handles least well is "obtain a high success rate" because this requires knowledge of your specific learners' current understanding. AI cannot judge whether an activity is pitched at the right level for your class. This is where your professional knowledge remains irreplaceable: reviewing the AI plan against what you know about your learners and adjusting the difficulty, pacing and support accordingly.
AI tools default to US educational standards unless you explicitly specify UK curriculum frameworks, and the difference is significant. A prompt asking for "a Grade 5 math lesson on fractions" will produce content aligned to US Common Core standards, using American terminology and assessment formats. Always specify "National Curriculum KS2" or the relevant exam board and specification number.
For primary planning, reference the National Curriculum programme of study directly: "Plan a Year 3 Science lesson aligned to NC KS2 Working Scientifically: asking relevant questions and using different types of scientific enquiries to answer them." The more precise your curriculum reference, the more accurately AI maps its content to what Ofsted expects to see in your planning.
For GCSE and A-Level, include the exam board and specification point number. "Create a revision lesson for AQA GCSE English Literature Paper 2 Section B: Power and Conflict poetry. Focus on comparing 'Ozymandias' and 'London' using the assessment objectives AO1 (informed personal response), AO2 (language, form and structure) and AO3 (context)." This level of specificity produces resources that align with mark schemes rather than generic poetry analysis.
For EYFS, reference the specific area of learning and the Early Learning Goals: "Design a week of enhanced continuous provision activities developing Communication and Language: Listening, Attention and Understanding ELG. Children are working within the 40-60 month developmental band." EYFS planning has unique requirements around child-initiated learning and adult-led interactions that general AI tools handle poorly without explicit guidance. Including the developmental band is essential for age-appropriate outputs.
Teachers who abandon AI planning tools usually do so because they made one of five predictable mistakes in their first few attempts. Recognising these patterns early saves frustration.
1. Accepting the first output without iteration. AI planning works best as a conversation. If the first response is too generic, do not start over. Instead, follow up: "That's too broad. Narrow the starter activity to focus specifically on the misconception that heavier objects fall faster." Each iteration sharpens the output. Expect 2-3 rounds of refinement for a good plan.
2. Not specifying UK curriculum alignment. AI defaults to US standards unless told otherwise. Always include "National Curriculum KS2" or "AQA GCSE specification 4.1.2" or "EYFS Communication and Language ELG" in your prompt. Without this, you get standards-aligned plans, but aligned to the wrong standards.
3. Forgetting to include prior knowledge. "Create a lesson on fractions for Year 5" produces a generic plan. "Create a lesson on fractions for Year 5 who can identify unit fractions but struggle to compare fractions with different denominators" produces a targeted plan. The prior knowledge statement is the most important sentence in your prompt.
4. Using AI for the wrong tasks. AI is excellent for generating activities, resources, questions and differentiated materials. It is poor at sequencing learning across a half-term in a way that builds understanding progressively, because it lacks knowledge of your school's curriculum mapping and your department's agreed approaches. Use AI for within-lesson planning; keep medium-term planning as a human task.
5. Trying to replace planning thinking with generation. The pedagogical decisions (what to teach next, what misconceptions to address, how to sequence concepts) are the valuable part of planning. AI should handle the production work after you have made those decisions. Outsourcing the thinking produces technically adequate but pedagogically shallow lessons that do not respond to your learners' actual needs.
AI's greatest planning value may be in differentiation: the task that most teachers find the most time-consuming and the most difficult to do well consistently. Producing three versions of a worksheet at different challenge levels takes 30 minutes manually; AI generates them in under a minute.
For learners with special educational needs, prompt specifically: "Adapt this lesson for a learner with dyslexia in Year 7. Provide: text in a dyslexia-friendly format (short sentences, sans-serif font suggestion, cream background recommendation), key vocabulary pre-taught with visual definitions, and a graphic organiser for structuring written responses instead of lined paper." The output will not be perfect for that specific child, but it provides a starting point that a SENCo or class teacher can refine in minutes rather than creating from scratch.
For EAL learners, AI generates bilingual vocabulary lists, sentence stems at appropriate complexity levels, and visual supports. For gifted and talented learners, it creates extension tasks that deepen rather than merely accelerate: "Create an extension for a Year 5 learner working at greater depth in maths. Rather than harder calculations, design a task that requires the learner to explain why the standard column addition algorithm works, using base-ten blocks as a visual proof."
The practical limit of AI differentiation is the same as its general planning limit: it does not know your specific learners. It can produce resources at different levels, but you must match the right level to the right child. Schools that maintain a simple "learner profile prompt bank" (a paragraph per learner describing their needs, updated termly) find that pasting this into AI prompts produces dramatically more personalised outputs. For comprehensive guidance on AI applications for learners with additional needs, see our guide to AI in special education.
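A learner profile prompt bank can be as simple as a dictionary of anonymised paragraphs that a short helper splices into adaptation requests. A sketch; every profile below is an invented placeholder, and a real bank must never contain names or identifying details:

```python
# Sketch of a "learner profile prompt bank": one anonymised paragraph per
# learner, updated termly. All profiles here are invented placeholders.
PROFILES = {
    "learner_a": "EAL (Polish first language). Strong oral comprehension; "
                 "needs sentence stems and pre-taught vocabulary for writing.",
    "learner_b": "Dyslexia. Short sentences, sans-serif text and graphic "
                 "organisers work well; avoid dense paragraphs.",
}

def differentiation_prompt(task: str, learner_keys: list) -> str:
    """Attach the relevant anonymised profiles to an adaptation request."""
    needs = "\n".join(f"- {PROFILES[key]}" for key in learner_keys)
    return (f"Adapt the following task for the learners described below.\n"
            f"Task: {task}\nLearners:\n{needs}")
```

Because only the selected profiles are pasted in, the prompt stays short and the AI output is shaped by the needs that actually matter for that task.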
The teachers who sustain AI-assisted planning beyond the first half-term are those who build it into a specific routine rather than using it ad hoc. Here is a weekly pattern that experienced AI-using teachers report works well.
Sunday evening (30 minutes). Review the upcoming week's objectives. For each lesson, write a 3-line planning prompt that includes the five elements (context, objective, constraints, differentiation, format). Paste all prompts into your AI tool. Save the outputs without reading them in detail.
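The Sunday batch step can even be scripted: collect the week's prompts into one file so they can be pasted into an AI tool in a single sitting. A sketch; the lesson slots and prompt text are placeholders, not a real timetable:

```python
from pathlib import Path

# Illustrative Sunday-evening batch: one short planning prompt per lesson,
# collected into a single file. The slots and prompts are placeholders.
weekly_prompts = {
    "Mon P1 Maths": "Year 6 Maths, adding fractions with different "
                    "denominators. 60 min. CPA progression. 5-question exit ticket.",
    "Tue P3 Science": "Year 6 Science, how light travels. 60 min. Torch "
                      "demonstration, diagram task, 4-question exit ticket.",
}

out_file = Path("weekly_prompts.txt")
out_file.write_text(
    "\n\n".join(f"## {slot}\n{prompt}" for slot, prompt in weekly_prompts.items()),
    encoding="utf-8",
)
```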
Monday morning (15 minutes). Review the AI-generated plans for Monday and Tuesday. Adjust activities based on your knowledge of the class. Add specific learner names to grouping suggestions. Check that the difficulty level matches your expectations. Print or upload to your planning format.
Wednesday (10 minutes). Review plans for Thursday and Friday. By now you have seen how the week's learning has progressed. Adjust the AI plans based on what actually happened in Monday-Wednesday lessons. This responsive adjustment is the most valuable part of the process: AI gets you the structure; you provide the responsiveness.
This routine typically saves 2-3 hours per week compared to planning from scratch. The time saving comes not from AI writing perfect plans, but from AI providing a solid draft that you refine, rather than staring at a blank page. For a comprehensive overview of AI tools available for planning and other classroom tasks, see our complete guide to AI for teachers.
AI planning tools generate more than lesson structures; they produce the supporting resources that typically consume the most preparation time. Worksheets, assessment materials, display content, parent communication and homework tasks can all be generated from a single well-crafted prompt.
The most efficient approach is to generate the lesson plan first, then immediately follow up with resource requests that reference it: "Based on the lesson plan above, create: (1) a retrieval practice starter with 5 questions covering last week's content on forces, (2) a scaffolded worksheet for the main activity with sentence starters and key vocabulary, (3) an extension task for early finishers, and (4) a 4-question exit ticket assessing the lesson objective." This produces a coherent set of resources aligned to a single lesson in under 2 minutes.
For retrieval practice, AI is particularly powerful. Prompt: "Create a retrieval practice grid for Year 9 History covering the last 4 weeks of content (causes of WW1, the Western Front, the Home Front, the Treaty of Versailles). Include 16 questions: 4 factual recall, 4 explain, 4 source-based, and 4 requiring links between topics. Arrange in a 4x4 grid with increasing difficulty from top-left to bottom-right." The resulting resource would take 20-30 minutes to create manually; AI produces it in seconds.
Report writing is another high-impact use case. Prompt: "Write a Year 4 end-of-term report comment for a learner who has made good progress in reading (moved from working towards to expected standard), needs to develop inference skills, and participates well in group discussions but rushes independent written work." Anonymised prompts like this produce draft comments that teachers refine with personal knowledge, saving the bulk of report-writing time during busy assessment periods. For a broader overview of how AI fits across all aspects of classroom teaching, see AI for teachers: the complete classroom guide.
One of the most common weaknesses in AI-generated lesson plans is that activities cluster at the lowest levels of cognitive demand. Webb's Depth of Knowledge (DoK) framework provides a practical tool for auditing and correcting this. Webb (1997) proposed four levels: Level 1 (Recall and Reproduction) requires simple retrieval of facts or procedures; Level 2 (Skills and Concepts) requires the application of a skill or understanding of a concept to a familiar context; Level 3 (Strategic Thinking) demands reasoning, justification or planning with multiple possible approaches; Level 4 (Extended Thinking) involves sustained higher-order reasoning, complex investigations or connections across disciplines.
Without explicit instruction, AI defaults to Levels 1 and 2. A vague prompt produces recall questions and simple application tasks. Specifying DoK level directly changes the output. Try: "Generate 3 questions at Webb's DoK Level 3 (Strategic Thinking) for Year 10 history learners on the causes of the First World War. Each question should require learners to construct an argument, justify a claim with evidence, or evaluate competing explanations. Avoid factual recall." The resulting questions are genuinely harder to produce and harder for learners to answer, which is precisely what the EEF's evidence on cognitive challenge recommends. For a deeper exploration of the framework and how to use it across your department, see our full guide to Webb's Depth of Knowledge.
Before using any AI-generated lesson, run these 5 checks grounded in cognitive science. However well written a lesson is, the checks confirm it follows the learning-science principles that actually improve student achievement.
Does the starter include retrieval from previous learning? A retrieval starter requires learners to recall prior knowledge without notes or peer support. Examples: memory quiz, low-stakes test, "write down everything you remember about fractions," quick recall game. Red flag: if the starter is primarily new information or exploration, not retrieval. A Year 4 teacher receives an AI-generated maths lesson with this starter: "Today we're learning about fractions. Discuss with a partner what you think a fraction is." She rewrites it: "Quick quiz (2 minutes, no notes): what do you call the number above the line in a fraction? What do you call the number below?"
Is new content introduced in small steps with frequent checking for understanding? Small steps mean one key idea per minute, not bundling multiple concepts. Checking means asking learners to show understanding (thumbs up, mini whiteboards, hand raise) before moving on. Red flag: if the lesson teaches 3-4 new ideas in the first 10 minutes without pausing to check. An AI-generated science lesson introduces the water cycle in one continuous 15-minute explanation, then asks learners to draw it. Rosenshine's evidence shows this will cause cognitive overload. The teacher breaks it into 5 mini-steps: evaporation (check), condensation (check), precipitation (check), collection (check), then guided practice drawing the cycle.
Are there worked examples before independent practice? A worked example shows the teacher doing the task step-by-step while learners watch and take notes. This reduces cognitive load by showing the correct method before learners attempt it. Red flag: if learners move straight from instruction to independent practice without seeing a model. An AI-generated English lesson asks learners to analyse a poem in the first activity (no worked example). The teacher adds a worked example: "I'm going to think aloud as I analyse this opening line. The poet uses 'silver' to describe moonlight. This is a metaphor because… I notice the imagery of metal, which suggests… Here's how I'd write that idea in an analytical sentence." Then learners attempt their own analysis.
Does the lesson progress up Bloom's taxonomy, not stay at Remember? Bloom's has 6 levels: Remember (recall facts), Understand (explain), Apply (use in new context), Analyse (break down), Evaluate (judge), Create (make). A weak lesson stays at "Remember", learners just recall facts. A strong lesson starts at Understand and reaches Apply or Analyse. Red flag: if all activities are recall-based. An AI-generated history lesson: "Learners will recall 5 facts about the Industrial Revolution." This is entirely at the Remember level. The teacher rewrites: "Learners will recall facts (Remember), explain why these changes happened (Understand), then analyse which change had the biggest impact on your local town (Analyse)."
Is there a plenary that requires learners to generate, not just receive? A generative plenary has learners produce something: write a summary, explain to a partner, create a quiz, classify examples. This forces the brain to process deeply, which strengthens memory (Bjork & Bjork, 1992). Red flag: if the plenary is a recap where the teacher explains the learning. An AI-generated PE lesson ends with: "The teacher will summarise what they learned about attacking strategies." This is passive listening. The teacher changes it: "Learners will work in pairs to create their own attacking drill. Each pair demonstrates their drill, and the class identifies which principles they used."
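The 5 checks above can be roughed out as a keyword scan over a plan before the close read. This is a crude heuristic of our own devising, not a substitute for professional review: it only flags plans missing the vocabulary those structures usually contain, as a trigger to look closer. The cue lists are our own guesses and should be tuned to your own planning format:

```python
# Crude keyword heuristic for the 5 checks. It cannot judge pedagogy; it
# flags plans lacking textual evidence of each structure for human review.
CHECKS = {
    "retrieval starter": ["retrieval", "recall", "quiz", "from memory"],
    "checking for understanding": ["check", "mini whiteboard", "thumbs"],
    "worked example": ["worked example", "model", "think aloud"],
    "beyond recall": ["explain", "apply", "analyse", "evaluate", "create"],
    "generative plenary": ["write a summary", "explain to a partner",
                           "create a quiz", "classify"],
}

def audit_plan(plan_text: str) -> list:
    """Return the names of checks the plan shows no textual evidence of."""
    text = plan_text.lower()
    return [name for name, cues in CHECKS.items()
            if not any(cue in text for cue in cues)]
```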
A Year 6 English teacher receives an AI-generated lesson on persuasive writing and runs the 5 checks above. The AI draft was well written, but applying the checks ensured the lesson followed evidence-based design principles and would actually improve student learning.
For the framework behind these checks, see our guide: Rosenshine's Principles: A Teacher's Guide.
AI lesson planning involves using artificial intelligence tools to generate teaching sequences, resources and assessments based on specific teacher prompts. Instead of writing plans from scratch, teachers provide details about their year group, curriculum objectives and learner needs. The tool then produces a structured draft that the teacher can refine and adapt for their specific classroom context.
An effective prompt must include five key elements to prevent the AI from generating generic content. Teachers should specify the context, clear learning objectives, constraints like time limits, differentiation needs and the desired output format. Providing exact details about prior knowledge and common misconceptions results in much higher quality planning materials.
Recent pilot data from the Department for Education indicates that experienced teachers can reduce their preparation time by 30 to 40 percent when using AI tools effectively. This time reduction comes primarily from automating the initial drafting of lesson structures and the generation of differentiated resources. Teachers must still allocate time to review and adapt these outputs to suit their specific learners.
Research shows that artificial intelligence tools are highly effective at structuring content when prompted with established frameworks like the principles of instruction by Rosenshine. Academic studies also highlight that these generated plans lack the deep contextual knowledge required for precise differentiation. The consensus emphasises that these tools should act as a drafting assistant rather than a replacement for professional teacher judgement.
The most frequent mistake is writing vague prompts that lack specific details about the year group, curriculum standards and learner misconceptions. Teachers also frequently fail to review the output, mistakenly assuming the tool understands the specific social dynamics and prior learning of their class. Trying to automate an entire term of planning at once rather than starting with a single subject is another common error that leads to frustration.
Teachers can instruct AI tools to provide specific scaffolds such as sentence starters, simplified glossaries and alternative reading texts for a planned lesson. By specifying the exact needs of learners with Special Educational Needs and Disabilities in the prompt, the generated resources become much more targeted. The teacher must then use their professional expertise to check that these suggested adaptations are appropriate for the individual learners in their classroom.
AI generates metacognitive checkpoints and self-reflection prompts that most teachers would not have time to create manually for every lesson. These activities move beyond factual recall into the territory of learners thinking about their own learning, which the EEF identifies as adding +7 months of progress.
Try prompting: "Add three metacognitive checkpoints to this lesson. At each checkpoint, learners should pause and answer a self-regulation question: (1) 'What strategy am I using and is it working?' (2) 'What is the hardest part of this task and what could I do differently?' (3) 'What would I tell a friend who was stuck on this?'" These prompts develop self-regulated learning habits that transfer across subjects.
AI can also create reflective journal prompts for the end of a lesson or unit: "Generate 5 reflection questions for Year 9 learners completing a unit on coastal geography. Questions should prompt learners to evaluate their own understanding, identify what they found difficult, and plan how they will revise the material." The resulting prompts are typically more varied and thoughtful than a teacher could generate in the 2 minutes available at the end of a lesson.
For inquiry-based learning sequences, AI generates investigation questions, research scaffolds and evaluation frameworks that support learner-led learning whilst maintaining curricular focus. The key is specifying the level of autonomy you want learners to have: "Create a guided inquiry structure where I provide the question but learners choose their methods" produces very different output from "Create an open inquiry where learners generate their own questions."
AI is also well-placed to generate structured talk scaffolds that give oracy activities the same rigour as written tasks. Accountable talk stems ("I agree with X because...", "I want to build on Y's point by...", "The evidence for this is...") reduce the cognitive load of managing both thinking and speaking simultaneously. Debate role cards, Socratic questioning sequences and discussion protocols can all be generated from a single prompt: "Generate four discussion roles for a Year 7 English class debating whether social media helps or harms teenagers. Each role should have a specific responsibility, a set of 3 sentence starters, and a self-evaluation question at the end." The resulting scaffold supports structured dialogue without removing the intellectual demand. For a broader framework connecting oracy to language development and classroom practice, see our guide to the importance of oracy in language development.
For prompt templates, SEND-specific adaptations, and subject-by-subject approaches, see our guide to AI differentiation in the classroom.
Not sure which AI platform works best for planning? Read our independent comparison of AI tools for guidance on selecting the right tool.
These papers provide the evidence base for AI-assisted planning. Each offers practical implications alongside the research findings.
Principles of Instruction: Research-Based Strategies That All Teachers Should Know View study ↗
Rosenshine (2012)
The gold standard for lesson structure. Rosenshine's 10 principles (daily review, small steps, questioning, models, guided practice, checking understanding, high success rate, scaffolding, independent practice, review) provide the framework that AI planning tools should be prompted to follow. Without this structure, AI-generated lessons tend to be activity-rich but cognitively shallow.
Artificial Intelligence in Education: Promises and Implications View study ↗
Holmes, Bialik & Fadel (2019)
A comprehensive examination of how AI can support teacher planning and decision-making. The authors argue that AI's greatest contribution is reducing the cognitive load of administrative tasks, freeing teachers to focus on pedagogical reasoning. Their analysis of AI-generated lesson plans shows consistent quality improvements when teachers provide specific, contextualised prompts.
The Science of Learning: 77 Studies That Every Teacher Needs to Know View study ↗
Busch & Watson (2019)
A practical synthesis of cognitive science research relevant to lesson design. The book covers retrieval practice, spaced practice, interleaving and cognitive load theory, all principles that should inform how teachers structure AI planning prompts. The more of these principles you include in your prompts, the more pedagogically sound the AI output becomes.
Teacher and AI: Classroom Teachers' Use of AI View study ↗
Chen et al. (2022)
A systematic review showing that planning and resource creation are the dominant AI use cases among classroom teachers. The study found that prompt quality and institutional support are the strongest predictors of successful adoption. Teachers with access to prompt templates and peer support groups reported higher satisfaction and sustained usage over time.
AI can generate a week's worth of lesson plans in minutes, but a vague prompt produces a generic plan that wastes more time editing than it saved. The difference between useful and useless AI planning comes down to prompt specificity: year group, prior knowledge, curriculum objectives, misconceptions and desired assessment evidence. This guide provides worked prompt examples across subjects and key stages, shows how to align AI outputs with Rosenshine's principles, and covers the common mistakes that make teachers abandon AI planning tools prematurely.

What does the research say? DfE pilot data (2024) indicates AI-assisted lesson planning reduces preparation time by 30-40% for experienced teachers. Rosenshine's (2012) principles remain the gold standard for lesson structure, and AI tools are most effective when prompted with these frameworks explicitly. Holmes et al. (2019) note that AI-generated lesson plans often lack the contextual knowledge of individual learners that effective differentiation requires, making teacher review essential.

A 2025 Twinkl survey of 6,500 UK teachers found that planning and resource creation is the most common AI application, with 47% of AI-using teachers applying it primarily to these tasks. The tools work: they generate objectives, sequence activities, create differentiated resources and align content to curriculum standards in minutes rather than hours. But the quality of the output depends almost entirely on the quality of the input. Knowing how to write effective prompts is the single most important skill for AI-assisted planning.
Every effective AI planning prompt contains five elements: context, objective, constraints, differentiation needs and output format. Missing any one of these produces a generic plan that requires more editing than it saves.
| Element | What to Include | Example |
|---|---|---|
| Context | Year group, subject, topic, prior knowledge, known misconceptions | "Year 5 Science. Learners can name the planets but confuse rotation and revolution." |
| Objective | Specific learning outcome with curriculum reference | "By the end of the lesson, learners explain the difference between Earth's rotation and revolution. NC KS2 Science: Earth and space." |
| Constraints | Lesson duration, available resources, activity types required | "60 minutes. Include a concrete demonstration using a torch and globe, paired discussion, and independent written task." |
| Differentiation | Specific learner needs, ability range, SEND considerations | "3 EAL learners need sentence stems. 4 learners working at greater depth need an extension comparing Earth and Mars orbits." |
| Output format | How you want the plan structured | "Format as: starter (5 min), teacher input (15 min), guided practice (15 min), independent task (20 min), plenary (5 min)." |
Combining all five elements into a single prompt produces a plan that typically needs 10-15 minutes of refinement rather than 45 minutes of creation from scratch. The more specific you are about what your learners already know and where they typically struggle, the more useful the AI output becomes. For a comprehensive library of prompt templates across subjects and key stages, see our guide to AI prompts every teacher should know.
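For teachers who batch their prompts (for example, in the Sunday-evening routine described later), the five-element structure can be sketched as a small script. This is an illustrative sketch only: the function name, element labels and example wording are my own, not taken from any particular AI tool.

```python
# Illustrative sketch: assemble the five prompt elements into one planning prompt.
# Raising an error on a missing element enforces the rule that omitting any one
# of the five produces a generic plan.
REQUIRED_ELEMENTS = ["context", "objective", "constraints", "differentiation", "output_format"]

def build_planning_prompt(elements: dict) -> str:
    """Combine the five elements into a single AI planning prompt."""
    missing = [k for k in REQUIRED_ELEMENTS if not elements.get(k, "").strip()]
    if missing:
        raise ValueError(f"Prompt is missing elements: {', '.join(missing)}")
    return "\n".join(
        f"{k.replace('_', ' ').capitalize()}: {elements[k]}" for k in REQUIRED_ELEMENTS
    )

# Usage example built from the Year 5 Science row of the table above.
prompt = build_planning_prompt({
    "context": "Year 5 Science. Learners can name the planets but confuse rotation and revolution.",
    "objective": "Explain the difference between Earth's rotation and revolution. NC KS2 Science: Earth and space.",
    "constraints": "60 minutes. Include a torch-and-globe demonstration, paired discussion and an independent written task.",
    "differentiation": "3 EAL learners need sentence stems; 4 greater-depth learners compare Earth and Mars orbits.",
    "output_format": "Starter (5 min), teacher input (15 min), guided practice (15 min), independent task (20 min), plenary (5 min).",
})
```

Writing the prompts once as structured fields also makes them reusable: next year's version of the same lesson only needs the context line updated.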
Different subjects have different planning demands, and the most effective AI prompts reflect those differences. Here are worked examples that teachers have refined through classroom use.
English (KS3). "Create a 60-minute lesson on persuasive writing for Year 8. Learners can identify persuasive techniques (rhetorical questions, repetition, emotive language) but struggle to use them in their own writing. Include: analysis of a model text with annotations, a scaffolded writing frame for lower attainers, and an assessment task where learners write the opening paragraph of a persuasive letter. Success criteria should reference AQA English Language Paper 2."
Maths (KS2). "Plan a sequence of 3 lessons on adding fractions with different denominators for Year 6. Learners are secure with equivalent fractions but make errors when the denominators share no common factor. Use concrete-pictorial-abstract progression. Include fraction bar manipulatives in lesson 1, bar model representations in lesson 2, and abstract calculation with self-checking in lesson 3. Each lesson should include a 5-question exit ticket."
Science (KS4). "Create a GCSE Biology lesson on natural selection for Year 10 (AQA specification 4.6.1). Learners understand inheritance and variation but confuse natural selection with Lamarckism. Include: a common misconception activity as a starter, a worked example using peppered moths, a data interpretation task using finch beak measurements, and a 6-mark exam-style question with model answer. Provide differentiated sentence starters for the extended writing."
History (KS3). "Plan a 50-minute lesson on the causes of the English Civil War for Year 8. Learners have studied Tudor England but this is their first lesson on the Stuarts. Include: a timeline sorting activity, source analysis of two contemporary documents (one pro-Parliament, one pro-King), paired discussion using 'What might X have thought about this?' prompts, and a written judgement task: 'Was religion or money the main cause?' Provide a writing frame with connectives for lower attainers."
EYFS. "Create a week of continuous provision activities on 'People Who Help Us' for Reception. Focus on Communication and Language ELG. Include: a role-play area setup list (doctor's surgery), 3 adult-led focus activities for groups of 6, and 5 provocations for independent investigation. Each activity should develop vocabulary: stethoscope, prescription, appointment, symptom, diagnosis. Include home learning ideas parents can do in 5 minutes."

AI-generated lesson plans improve significantly when you prompt with Rosenshine's principles explicitly. Without this guidance, AI tends to produce lessons that are activity-rich but structurally weak, missing the review, modelling and guided practice stages that Rosenshine's research identifies as essential.
Try adding this to any planning prompt: "Structure the lesson following Rosenshine's principles: begin with daily review of prior learning (5 min), present new material in small steps with modelling (10 min), provide guided practice with checking for understanding (15 min), then independent practice with monitoring (15 min)." This single addition transforms the output from a list of activities into a cognitively sequenced lesson.
The principle of "scaffolding difficult tasks" is where AI adds particular value. Once AI has generated a lesson plan, you can prompt it to create scaffolded versions of each activity: "Take the independent task and create three versions: one with a fully worked example and sentence starters, one with key vocabulary provided, and one with no scaffolding for learners working at greater depth." This produces differentiated resources in seconds that would take 20 minutes to create manually.
The principle AI handles least well is "obtain a high success rate" because this requires knowledge of your specific learners' current understanding. AI cannot judge whether an activity is pitched at the right level for your class. This is where your professional knowledge remains irreplaceable: reviewing the AI plan against what you know about your learners and adjusting the difficulty, pacing and support accordingly.
AI tools default to US educational standards unless you explicitly specify UK curriculum frameworks, and the difference is significant. A prompt asking for "a Grade 5 math lesson on fractions" will produce content aligned to US Common Core standards, using American terminology and assessment formats. Always specify "National Curriculum KS2" or the relevant exam board and specification number.
For primary planning, reference the National Curriculum programme of study directly: "Plan a Year 3 Science lesson aligned to NC KS2 Working Scientifically: asking relevant questions and using different types of scientific enquiries to answer them." The more precise your curriculum reference, the more accurately AI maps its content to what Ofsted expects to see in your planning.
For GCSE and A-Level, include the exam board and specification point number. "Create a revision lesson for AQA GCSE English Literature Paper 2 Section B: Power and Conflict poetry. Focus on comparing 'Ozymandias' and 'London' using the assessment objectives AO1 (informed personal response), AO2 (language, form and structure) and AO3 (context)." This level of specificity produces resources that align with mark schemes rather than generic poetry analysis.
For EYFS, reference the specific area of learning and the Early Learning Goals: "Design a week of enhanced continuous provision activities developing Communication and Language: Listening, Attention and Understanding ELG. Children are working within the 40-60 month developmental band." EYFS planning has unique requirements around child-initiated learning and adult-led interactions that general AI tools handle poorly without explicit guidance. Including the developmental band is essential for age-appropriate outputs.
Teachers who abandon AI planning tools usually do so because they made one of five predictable mistakes in their first few attempts. Recognising these patterns early saves frustration.
1. Accepting the first output without iteration. AI planning works best as a conversation. If the first response is too generic, do not start over. Instead, follow up: "That's too broad. Narrow the starter activity to focus specifically on the misconception that heavier objects fall faster." Each iteration sharpens the output. Expect 2-3 rounds of refinement for a good plan.
2. Not specifying UK curriculum alignment. AI defaults to US standards unless told otherwise. Always include "National Curriculum KS2" or "AQA GCSE specification 4.1.2" or "EYFS Communication and Language ELG" in your prompt. Without this, you get standards-aligned plans, but aligned to the wrong standards.
3. Forgetting to include prior knowledge. "Create a lesson on fractions for Year 5" produces a generic plan. "Create a lesson on fractions for Year 5 who can identify unit fractions but struggle to compare fractions with different denominators" produces a targeted plan. The prior knowledge statement is the most important sentence in your prompt.
4. Using AI for the wrong tasks. AI is excellent for generating activities, resources, questions and differentiated materials. It is poor at sequencing learning across a half-term in a way that builds understanding progressively, because it lacks knowledge of your school's curriculum mapping and your department's agreed approaches. Use AI for within-lesson planning; keep medium-term planning as a human task.
5. Trying to replace planning thinking with generation. The pedagogical decisions (what to teach next, which misconceptions to address, how to sequence concepts) are the valuable part of planning. AI should handle the production work after you have made those decisions. Outsourcing the thinking produces technically adequate but pedagogically shallow lessons that do not respond to your learners' actual needs.
AI's greatest planning value may be in differentiation: the task most teachers find the most time-consuming and the hardest to do well consistently. Producing three versions of a worksheet at different challenge levels takes 30 minutes manually; AI generates them in under a minute.
For learners with special educational needs, prompt specifically: "Adapt this lesson for a learner with dyslexia in Year 7. Provide: text in a dyslexia-friendly format (short sentences, sans-serif font suggestion, cream background recommendation), key vocabulary pre-taught with visual definitions, and a graphic organiser for structuring written responses instead of lined paper." The output will not be perfect for that specific child, but it provides a starting point that a SENCo or class teacher can refine in minutes rather than creating from scratch.
For EAL learners, AI generates bilingual vocabulary lists, sentence stems at appropriate complexity levels, and visual supports. For gifted and talented learners, it creates extension tasks that deepen rather than merely accelerate: "Create an extension for a Year 5 learner working at greater depth in maths. Rather than harder calculations, design a task that requires the learner to explain why the standard column addition algorithm works, using base-ten blocks as a visual proof."
The practical limit of AI differentiation is the same as its general planning limit: it does not know your specific learners. It can produce resources at different levels, but you must match the right level to the right child. Schools that maintain a simple "learner profile prompt bank" (a paragraph per learner describing their needs, updated termly) find that pasting this into AI prompts produces dramatically more personalised outputs. For comprehensive guidance on AI applications for learners with additional needs, see our guide to AI in special education.
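A minimal sketch of such a prompt bank, assuming anonymised profile codes (the learner codes, descriptions and function name below are hypothetical examples, not from any real tool):

```python
# Illustrative "learner profile prompt bank": anonymised codes, one short
# paragraph per learner, stored once and appended to any planning prompt.
# Keeping profiles anonymised matters for data protection.
profiles = {
    "Learner A": "EAL (first language Polish); secure number facts; needs sentence stems for written explanations.",
    "Learner B": "Dyslexia; strong verbal reasoning; use cream background and pre-teach key vocabulary.",
    "Learner C": "Working at greater depth in maths; extend with explain-why tasks, not harder calculations.",
}

def add_profiles(base_prompt: str, codes: list[str]) -> str:
    """Append the selected anonymised profiles to a planning prompt."""
    lines = [f"- {code}: {profiles[code]}" for code in codes]
    return base_prompt + "\n\nDifferentiate for these learners:\n" + "\n".join(lines)

# Usage: one line turns a generic prompt into a personalised one.
result = add_profiles("Plan a Year 5 maths lesson on comparing fractions.", ["Learner A", "Learner C"])
```

Updating the bank termly, as the article suggests, keeps the pasted context current without rewriting every prompt.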
The teachers who sustain AI-assisted planning beyond the first half-term are those who build it into a specific routine rather than using it ad hoc. Here is a weekly pattern that experienced AI-using teachers report works well.
Sunday evening (30 minutes). Review the upcoming week's objectives. For each lesson, write a 3-line planning prompt that includes the five elements (context, objective, constraints, differentiation, format). Paste all prompts into your AI tool. Save the outputs without reading them in detail.
Monday morning (15 minutes). Review the AI-generated plans for Monday and Tuesday. Adjust activities based on your knowledge of the class. Add specific learner names to grouping suggestions. Check that the difficulty level matches your expectations. Print or upload to your planning format.
Wednesday (10 minutes). Review plans for Thursday and Friday. By now you have seen how the week's learning has progressed. Adjust the AI plans based on what actually happened in Monday-Wednesday lessons. This responsive adjustment is the most valuable part of the process: AI provides the structure; you provide the responsiveness.
This routine typically saves 2-3 hours per week compared to planning from scratch. The time saving comes not from AI writing perfect plans, but from AI providing a solid draft that you refine, rather than staring at a blank page. For a comprehensive overview of AI tools available for planning and other classroom tasks, see our complete guide to AI for teachers.
AI planning tools generate more than lesson structures; they produce the supporting resources that typically consume the most preparation time. Worksheets, assessment materials, display content, parent communication and homework tasks can all be generated from a single well-crafted prompt.
The most efficient approach is to generate the lesson plan first, then immediately follow up with resource requests that reference it: "Based on the lesson plan above, create: (1) a retrieval practice starter with 5 questions covering last week's content on forces, (2) a scaffolded worksheet for the main activity with sentence starters and key vocabulary, (3) an extension task for early finishers, and (4) a 4-question exit ticket assessing the lesson objective." This produces a coherent set of resources aligned to a single lesson in under 2 minutes.
For retrieval practice, AI is particularly powerful. Prompt: "Create a retrieval practice grid for Year 9 History covering the last 4 weeks of content (causes of WW1, the Western Front, the Home Front, the Treaty of Versailles). Include 16 questions: 4 factual recall, 4 explain, 4 source-based, and 4 requiring links between topics. Arrange in a 4x4 grid with increasing difficulty from top-left to bottom-right." The resulting resource would take 20-30 minutes to create manually; AI produces it in seconds.
Report writing is another high-impact use case. Prompt: "Write a Year 4 end-of-term report comment for a learner who has made good progress in reading (moved from working towards to expected standard), needs to develop inference skills, and participates well in group discussions but rushes independent written work." Anonymised prompts like this produce draft comments that teachers refine with personal knowledge, saving the bulk of report-writing time during busy assessment periods. For a broader overview of how AI fits across all aspects of classroom teaching, see AI for teachers: the complete classroom guide.
One of the most common weaknesses in AI-generated lesson plans is that activities cluster at the lowest levels of cognitive demand. Webb's Depth of Knowledge (DoK) framework provides a practical tool for auditing and correcting this. Webb (1997) proposed four levels: Level 1 (Recall and Reproduction) requires simple retrieval of facts or procedures; Level 2 (Skills and Concepts) requires the application of a skill or understanding of a concept to a familiar context; Level 3 (Strategic Thinking) demands reasoning, justification or planning with multiple possible approaches; Level 4 (Extended Thinking) involves sustained higher-order reasoning, complex investigations or connections across disciplines.
Without explicit instruction, AI defaults to Levels 1 and 2. A vague prompt produces recall questions and simple application tasks. Specifying DoK level directly changes the output. Try: "Generate 3 questions at Webb's DoK Level 3 (Strategic Thinking) for Year 10 history learners on the causes of the First World War. Each question should require learners to construct an argument, justify a claim with evidence, or evaluate competing explanations. Avoid factual recall." The resulting questions are genuinely harder to produce and harder for learners to answer, which is precisely what the EEF's evidence on cognitive challenge recommends. For a deeper exploration of the framework and how to use it across your department, see our full guide to Webb's Depth of Knowledge.
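One way to operationalise this is a simple lookup from DoK level to question-stem guidance. The stems below are illustrative examples I have written, not an official mapping from Webb's framework, and the function name is hypothetical.

```python
# Hypothetical mapping from Webb's DoK level to question-stem guidance.
# The stems are illustrative, not an official part of Webb's framework.
DOK_STEMS = {
    1: "Use recall stems such as 'State...', 'Name...', 'List...'.",
    2: "Use application stems in a familiar context, such as 'Classify...', 'Summarise...'.",
    3: "Require reasoning with evidence: 'Justify...', 'Evaluate competing explanations of...'.",
    4: "Require sustained, cross-topic investigation: 'Design an inquiry connecting...'.",
}

def dok_question_prompt(topic: str, audience: str, level: int, n: int = 3) -> str:
    """Build a question-generation prompt pinned to a single DoK level."""
    return (
        f"Generate {n} questions at Webb's DoK Level {level} for {audience} on {topic}. "
        f"{DOK_STEMS[level]} Avoid questions below this level, especially factual recall."
    )

# Usage example matching the Year 10 history prompt above.
dok_prompt = dok_question_prompt("the causes of the First World War", "Year 10 history learners", 3)
```

Pinning the level explicitly, rather than asking for "challenging questions", is what stops the output drifting back to Levels 1 and 2.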
Before using any AI-generated lesson, run these 5 checks grounded in cognitive science. These checks ensure that regardless of how well-written a lesson is, it follows the learning science principles that actually improve student achievement.
Does the starter include retrieval from previous learning? A retrieval starter requires learners to recall prior knowledge without notes or peer support. Examples: memory quiz, low-stakes test, "write down everything you remember about fractions," quick recall game. Red flag: if the starter is primarily new information or exploration, not retrieval. A Year 4 teacher receives an AI-generated maths lesson with this starter: "Today we're learning about fractions. Discuss with a partner what you think a fraction is." She rewrites it: "Quick quiz (2 minutes, no notes): what do you call the number above the line in a fraction? What do you call the number below?"
Is new content introduced in small steps with frequent checking for understanding? Small steps mean one key idea per minute, not bundling multiple concepts. Checking means asking learners to show understanding (thumbs up, mini whiteboards, hand raise) before moving on. Red flag: if the lesson teaches 3-4 new ideas in the first 10 minutes without pausing to check. An AI-generated science lesson introduces the water cycle in one continuous 15-minute explanation, then asks learners to draw it. Rosenshine's evidence shows this will cause cognitive overload. The teacher breaks it into 5 mini-steps: evaporation (check), condensation (check), precipitation (check), collection (check), then guided practice drawing the cycle.
Are there worked examples before independent practice? A worked example shows the teacher doing the task step-by-step while learners watch and take notes. This reduces cognitive load by showing the correct method before learners attempt it. Red flag: if learners move straight from instruction to independent practice without seeing a model. An AI-generated English lesson asks learners to analyse a poem in the first activity (no worked example). The teacher adds a worked example: "I'm going to think aloud as I analyse this opening line. The poet uses 'silver' to describe moonlight. This is a metaphor because… I notice the imagery of metal, which suggests… Here's how I'd write that idea in an analytical sentence." Then learners attempt their own analysis.
Does the lesson progress up Bloom's taxonomy, not stay at Remember? Bloom's taxonomy has six levels: Remember (recall facts), Understand (explain), Apply (use in new context), Analyse (break down), Evaluate (judge), Create (make). A weak lesson stays at Remember: learners just recall facts. A strong lesson starts at Understand and reaches Apply or Analyse. Red flag: if all activities are recall-based. An AI-generated history lesson: "Learners will recall 5 facts about the Industrial Revolution." This is entirely at the Remember level. The teacher rewrites: "Learners will recall facts (Remember), explain why these changes happened (Understand), then analyse which change had the biggest impact on your local town (Analyse)."
Is there a plenary that requires learners to generate, not just receive? A generative plenary has learners produce something: write a summary, explain to a partner, create a quiz, classify examples. This forces the brain to process deeply, which strengthens memory (Bjork & Bjork, 1992). Red flag: if the plenary is a recap where the teacher explains the learning. An AI-generated PE lesson ends with: "The teacher will summarise what they learned about attacking strategies." This is passive listening. The teacher changes it: "Learners will work in pairs to create their own attacking drill. Each pair demonstrates their drill, and the class identifies which principles they used."
A Year 6 English teacher receives an AI-generated lesson on persuasive writing and runs the five checks above. After applying them, the lesson follows evidence-based design principles. The AI draft was well-written, but the checks ensured it would actually improve student learning.
For the full framework behind these checks, see our guide Rosenshine's Principles: A Teacher's Guide.
AI lesson planning involves using artificial intelligence tools to generate teaching sequences, resources and assessments based on specific teacher prompts. Instead of writing plans from scratch, teachers provide details about their year group, curriculum objectives and learner needs. The tool then produces a structured draft that the teacher can refine and adapt for their specific classroom context.
An effective prompt must include five key elements to prevent the AI from generating generic content. Teachers should specify the context, clear learning objectives, constraints like time limits, differentiation needs and the desired output format. Providing exact details about prior knowledge and common misconceptions results in much higher quality planning materials.
Recent pilot data from the Department for Education indicates that experienced teachers can reduce their preparation time by 30 to 40 percent when using AI tools effectively. This time reduction comes primarily from automating the initial drafting of lesson structures and the generation of differentiated resources. Teachers must still allocate time to review and adapt these outputs to suit their specific learners.
Research shows that artificial intelligence tools are highly effective at structuring content when prompted with established frameworks like the principles of instruction by Rosenshine. Academic studies also highlight that these generated plans lack the deep contextual knowledge required for precise differentiation. The consensus emphasises that these tools should act as a drafting assistant rather than a replacement for professional teacher judgement.
The most frequent mistake is writing vague prompts that lack specific details about the year group, curriculum standards and learner misconceptions. Teachers also frequently fail to review the output, mistakenly assuming the tool understands the specific social dynamics and prior learning of their class. Trying to automate an entire term of planning at once rather than starting with a single subject is another common error that leads to frustration.
Teachers can instruct AI tools to provide specific scaffolds such as sentence starters, simplified glossaries and alternative reading texts for a planned lesson. By specifying the exact needs of learners with Special Educational Needs and Disabilities in the prompt, the generated resources become much more targeted. The teacher must then use their professional expertise to check that these suggested adaptations are appropriate for the individual learners in their classroom.
AI generates metacognitive checkpoints and self-reflection prompts that most teachers would not have time to create manually for every lesson. These activities move beyond factual recall into the territory of learners thinking about their own learning, which the EEF identifies as adding +7 months of progress.
Try prompting: "Add three metacognitive checkpoints to this lesson. At each checkpoint, learners should pause and answer a self-regulation question: (1) 'What strategy am I using and is it working?' (2) 'What is the hardest part of this task and what could I do differently?' (3) 'What would I tell a friend who was stuck on this?'" These prompts develop self-regulated learning habits that transfer across subjects.
AI can also create reflective journal prompts for the end of a lesson or unit: "Generate 5 reflection questions for Year 9 learners completing a unit on coastal geography. Questions should prompt learners to evaluate their own understanding, identify what they found difficult, and plan how they will revise the material." The resulting prompts are typically more varied and thoughtful than a teacher could generate in the 2 minutes available at the end of a lesson.
For inquiry-based learning sequences, AI generates investigation questions, research scaffolds and evaluation frameworks that support learner-led learning whilst maintaining curricular focus. The key is specifying the level of autonomy you want learners to have: "Create a guided inquiry structure where I provide the question but learners choose their methods" produces very different output from "Create an open inquiry where learners generate their own questions."
AI is also well-placed to generate structured talk scaffolds that give oracy activities the same rigour as written tasks. Accountable talk stems ("I agree with X because...", "I want to build on Y's point by...", "The evidence for this is...") reduce the cognitive load of managing both thinking and speaking simultaneously. Debate role cards, Socratic questioning sequences and discussion protocols can all be generated from a single prompt: "Generate four discussion roles for a Year 7 English class debating whether social media helps or harms teenagers. Each role should have a specific responsibility, a set of 3 sentence starters, and a self-evaluation question at the end." The resulting scaffold supports structured dialogue without removing the intellectual demand. For a broader framework connecting oracy to language development and classroom practice, see our guide to the importance of oracy in language development.
For prompt templates, SEND-specific adaptations, and subject-by-subject approaches, see our guide to AI differentiation in the classroom.
Not sure which AI platform works best for planning? Read our independent comparison of AI tools for guidance on selecting the right tool.
These papers provide the evidence base for AI-assisted planning. Each offers practical implications alongside the research findings.
Principles of Instruction: Research-Based Strategies That All Teachers Should Know View study ↗
Rosenshine (2012)
The gold standard for lesson structure. Rosenshine's 10 principles (daily review, small steps, questioning, models, guided practice, checking understanding, high success rate, scaffolding, independent practice, review) provide the framework that AI planning tools should be prompted to follow. Without this structure, AI-generated lessons tend to be activity-rich but cognitively shallow.
Artificial Intelligence in Education: Promises and Implications View study ↗
Holmes, Bialik & Fadel (2019)
A comprehensive examination of how AI can support teacher planning and decision-making. The authors argue that AI's greatest contribution is reducing the cognitive load of administrative tasks, freeing teachers to focus on pedagogical reasoning. Their analysis of AI-generated lesson plans shows consistent quality improvements when teachers provide specific, contextualised prompts.
The Science of Learning: 77 Studies That Every Teacher Needs to Know View study ↗
Busch & Watson (2019)
A practical synthesis of cognitive science research relevant to lesson design. The book covers retrieval practice, spaced practice, interleaving and cognitive load theory, all principles that should inform how teachers structure AI planning prompts. The more of these principles you include in your prompts, the more pedagogically sound the AI output becomes.
Teacher and AI: Classroom Teachers' Use of AI View study ↗
320+ citations
Chen et al. (2022)
A systematic review showing that planning and resource creation are the dominant AI use cases among classroom teachers. The study found that prompt quality and institutional support are the strongest predictors of successful adoption. Teachers with access to prompt templates and peer support groups reported higher satisfaction and sustained usage over time.