AI for Teachers: A Complete Guide to Classroom AI Tools
The definitive guide to AI tools for teachers. Compare AI assistants for lesson planning, assessment, differentiation and SEND support.


AI tools can save teachers up to five hours a week on planning, marking and administration, but only when used with clear purpose and professional judgement. This guide covers every major use of AI in the classroom: lesson planning, assessment, differentiation, SEND support, and building school-wide AI literacy. Each section links to deeper guides across the Structural Learning AI cluster, giving you a single starting point for the full picture.

What does the research say? UNESCO's (2023) global survey found 42% of teachers have used AI tools, but only 10% received formal training. The DfE (2024) reports that schools with structured AI policies see higher quality usage than those without. Hattie's (2023) updated meta-analysis ranks AI-supported feedback at d = 0.48, comparable to traditional formative assessment. The EEF cautions that technology alone has limited impact (+4 months) unless combined with effective pedagogy and teacher training.
A 2025 Twinkl survey of 6,500 UK teachers found 60% are already using AI for work purposes, whilst the Center for Democracy and Technology reports that 85% of teachers and 86% of students used AI in the preceding school year. The gap between usage and training is the central challenge: teachers are adopting AI faster than schools can support them. This guide helps bridge that gap by connecting practical classroom applications with the evidence base and linking to detailed resources for each area.

AI excels at pattern recognition, text generation and data analysis, but it cannot replace professional judgement, relationship-building or the nuanced understanding of individual pupils. Drawing this line clearly matters. Teachers who understand what AI does well use it confidently; those who expect it to think like a colleague end up frustrated.
What AI handles well: generating first drafts of lesson plans, creating differentiated worksheets at multiple reading levels, providing instant feedback on factual or mathematical work, summarising pupil performance data across assessments, and automating report-writing templates. A Year 6 teacher might use AI to generate three versions of a reading comprehension task in under a minute. A secondary science teacher might use it to create a bank of exam-style questions aligned to specific AQA specification points.

What AI cannot do: understand why a particular pupil is struggling emotionally, judge whether a creative writing piece genuinely shows progress for that child, build the trust that makes a reluctant learner take risks, or navigate the complex social dynamics of a classroom. Rose Luckin's research at UCL Knowledge Lab (2018) consistently shows that AI enhances teaching when it handles routine cognitive work, freeing the teacher to focus on the relational and adaptive work that only humans do well.
The practical rule: if a task involves pattern-matching or generation from a template, AI will probably help. If it involves judgement about a specific child in a specific context, you remain essential. Keep that distinction in mind as you read through each application below.
No single AI tool does everything well, and vendor marketing rarely tells you the limitations. The table below compares the major tools UK teachers encounter, assessed honestly on what matters in a classroom context. This is vendor-neutral: Structural Learning has no commercial relationship with any of these providers.
| Tool | Best For | Limitations | Cost | GDPR Note |
|---|---|---|---|---|
| ChatGPT (OpenAI) | General planning, resource creation, explaining concepts at different levels | Can hallucinate facts; not curriculum-aligned by default | Free tier available; Plus from $20/mo | Do not input pupil names or data |
| Gemini (Google) | Google Workspace integration, summarising documents, research | Less strong on UK curriculum specifics; newer product | Free tier available; Advanced from $19.99/mo | Check school Google admin settings |
| Claude (Anthropic) | Long document analysis, nuanced writing, careful reasoning | Smaller ecosystem; no image generation | Free tier available; Pro from $20/mo | Do not input pupil names or data |
| Diffit | Instant differentiation: creates reading materials at multiple levels from any source | Focused on reading/vocabulary; limited beyond that | Free for teachers | US-based; check data processing |
| SchoolAI | Student-facing AI spaces with teacher controls and monitoring | Requires school-level subscription; setup time | Free tier; Pro from $15/mo per teacher | Designed for classroom use; admin dashboard |
| TeacherMatic | UK curriculum-aligned resource generation (lesson plans, quizzes, rubrics) | Template-driven; less flexible than general AI | Free tier; Premium from £5/mo | UK-based; GDPR compliant |
The general-purpose tools (ChatGPT, Gemini, Claude) are most flexible but require you to write good prompts and verify outputs. The education-specific tools (Diffit, SchoolAI, TeacherMatic) are narrower but require less prompt skill. Most teachers benefit from starting with one general tool and one specialist tool, building confidence before expanding. For detailed prompt strategies that work across all these platforms, see our guide to AI prompts every teacher should know.
The most productive teachers treat AI as a first-draft generator, not a finished-product machine. They prompt specifically, review critically, and adapt based on their professional knowledge of their pupils. The least productive users paste vague requests, accept outputs uncritically, and wonder why the results feel generic. The tool amplifies whatever you bring to it: strong pedagogical knowledge produces strong AI-assisted resources; weak prompts produce weak outputs regardless of which tool you use.
1. Using AI outputs without checking facts. Large language models generate plausible text, not verified truth. Always fact-check dates, statistics, quotations and scientific claims before sharing AI-generated content with pupils. A secondary history teacher in Bristol discovered that ChatGPT confidently attributed a quotation to Churchill that Churchill never said. The AI was not lying; it was pattern-matching based on training data that included the misattribution.
2. Inputting pupil data into AI tools. Names, SEN information, behavioural records and assessment scores must never be entered into any external AI system. Use anonymised references ("Pupil A is working at greater depth in reading") or generic descriptions. This is a UK GDPR requirement, not a suggestion.
3. Expecting AI to understand your class. AI does not know that three pupils in your Year 5 class arrived mid-year with limited English, that your school uses a specific phonics programme, or that your Year 10s have already covered cell biology but not genetics. You must provide this context in every prompt for the output to be useful.
4. Replacing thinking time with generation time. The pedagogical decisions (what to teach, in what order, with what assessment) are the valuable part of planning. AI should handle the production work (formatting, generating variations, creating resources) after you have made those decisions. Outsourcing the thinking to AI produces technically adequate but pedagogically shallow lessons.
5. Trying every tool at once. Tool fatigue is real. Schools that mandate three or four AI tools simultaneously see lower adoption than those that support mastery of one tool before introducing the next. Pick the tool that addresses your biggest time pressure and learn to use it well before expanding.
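The anonymisation rule in mistake 2 can be applied mechanically before any text leaves your machine. Below is a minimal illustrative sketch in Python; the function name and the pupil names are invented for the example, and a real school would adapt this to its own class lists.

```python
import re

def anonymise(text, pupil_names):
    """Replace known pupil names with generic labels (Pupil A, Pupil B, ...)
    before the text is pasted into any external AI tool."""
    mapping = {}
    for i, name in enumerate(pupil_names):
        label = f"Pupil {chr(ord('A') + i)}"  # works for up to 26 pupils
        mapping[name] = label
        # \b ensures whole-word, case-insensitive matches only
        text = re.sub(rf"\b{re.escape(name)}\b", label, text,
                      flags=re.IGNORECASE)
    return text, mapping

draft = "Amira is working at greater depth in reading; Tom needs phonics support."
safe, key = anonymise(draft, ["Amira", "Tom"])
print(safe)
# Pupil A is working at greater depth in reading; Pupil B needs phonics support.
```

The mapping stays on your machine, so AI feedback phrased about "Pupil A" can be translated back afterwards without any personal data ever reaching the tool.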
AI applications vary significantly across subjects, and the teachers getting most value match the tool to their subject's specific demands. A one-size-fits-all approach wastes time. Here is what experienced teachers report works in practice across the major subject areas.
English and Literacy. AI generates differentiated reading comprehension questions from any text in seconds, creates model paragraphs at different GCSE grade boundaries, and produces vocabulary-building activities matched to reading age. A Year 9 English teacher in Leeds uses Claude to generate three model responses to a literature essay question (one at grade 4, one at grade 6, one at grade 8), then uses them as the basis for a class discussion about what makes writing effective. The AI does the generation; the teacher does the pedagogical thinking.
Mathematics. AI creates problem sets that progress through difficulty levels, generates worked examples with step-by-step solutions, and produces diagnostic questions that target specific misconceptions. A primary maths lead uses ChatGPT to create "What's the same? What's different?" comparison tasks for fractions, then adapts them based on her knowledge of which pupils need concrete representations versus abstract notation. For guidance on managing pupil AI use and assessment design, see AI and academic integrity.
Science. AI generates exam-style questions aligned to AQA, Edexcel or OCR specification points, creates revision summaries that emphasise required practicals, and produces differentiated experiment planning sheets. It is particularly useful for generating scaffolded scientific writing frames that support pupils in structuring method, results and conclusion sections using appropriate scientific vocabulary.
Humanities. AI produces source analysis scaffolds for History, creates structured debate preparation materials for RE, and generates case study summaries for Geography. A GCSE History teacher uses Gemini to create "evidence cards" summarising different historical interpretations of the same event, which pupils then evaluate and rank. The AI curates the sources; the pupils develop the critical thinking.
EYFS and KS1. AI generates phonics-based reading materials at specific phases, creates social stories for transitions, and produces visual timetable descriptions. Teachers of younger children find AI most useful for administrative tasks (report comments, parent communication drafts, EHCP evidence summaries) rather than direct pupil-facing content, since young children need human interaction above all else.
AI reduces lesson planning time from hours to minutes, but the quality depends entirely on how specifically you prompt it. Vague requests produce generic plans. Specific prompts that include year group, prior knowledge, curriculum objectives and desired assessment evidence produce plans worth adapting.
A primary teacher planning a Year 4 fractions sequence might prompt: "Create a 5-lesson sequence on equivalent fractions for Year 4, assuming pupils can identify halves and quarters but struggle with the concept of equivalence. Include concrete-pictorial-abstract progression, paired activities, and a formative assessment task for lesson 3." The output will need professional refinement, but the structural work is done in seconds rather than an evening.
Secondary teachers report particular value in using AI to generate starter activities, worked examples at multiple difficulty levels, and homework tasks aligned to specific exam board mark schemes. A 2025 survey by Teacher Tapp found that 47% of teachers who use AI apply it primarily to planning and resource creation. The time saved is real, but the pedagogical decisions (knowing which activities suit your specific class, understanding where misconceptions typically arise, and judging the right level of challenge) remain yours. For a detailed walkthrough of AI-assisted planning with subject-specific examples, see our full guide to AI in lesson planning.
The difference between a useful and useless AI-generated lesson plan almost always comes down to prompt structure. Compare these two approaches:
| Weak Prompt | Strong Prompt |
|---|---|
| "Create a lesson plan about the water cycle for Year 5." | "Create a 60-minute lesson on the water cycle for Year 5. Pupils can name evaporation and condensation but confuse precipitation with condensation. Include a concrete demonstration, paired discussion using sentence stems, and a 6-question exit ticket assessing the distinction between precipitation and condensation. Align to NC KS2 Science: states of matter." |
The strong prompt includes five elements that make the output immediately useful: time constraint, prior knowledge, specific misconception, activity types, and curriculum alignment. Without these, AI produces a generic plan that requires more editing time than it saved. Our detailed guide to effective AI prompts for teachers provides templates for every major subject and key stage.
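Those five elements can be captured as a reusable template so that none is forgotten under time pressure. Here is a hypothetical sketch in Python; the function and parameter names are my own for illustration, not part of any tool.

```python
def build_lesson_prompt(topic, year_group, minutes, prior_knowledge,
                        misconception, activities, curriculum_link):
    """Assemble a lesson-planning prompt containing the five elements
    that make AI output immediately usable: time constraint, prior
    knowledge, specific misconception, activity types, curriculum link."""
    return (
        f"Create a {minutes}-minute lesson on {topic} for {year_group}. "
        f"Pupils {prior_knowledge}. "
        f"Address this misconception: {misconception}. "
        f"Include: {', '.join(activities)}. "
        f"Align to {curriculum_link}."
    )

prompt = build_lesson_prompt(
    topic="the water cycle",
    year_group="Year 5",
    minutes=60,
    prior_knowledge="can name evaporation and condensation",
    misconception="confusing precipitation with condensation",
    activities=["a concrete demonstration",
                "paired discussion using sentence stems",
                "a 6-question exit ticket"],
    curriculum_link="NC KS2 Science: states of matter",
)
print(prompt)
```

The same skeleton works for any subject: swap the topic, misconception and curriculum reference, and the structural quality of the prompt stays constant.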
The DfE's 2025 guidance explicitly supports AI for formative, low-stakes marking, including classroom quizzes, homework feedback and exam-style question generation. Dylan Wiliam's research on formative assessment demonstrates that timely, specific feedback significantly improves achievement, yet traditional marking methods often delay this intervention by days or weeks. AI closes that gap. For a detailed breakdown of which subjects AI can mark reliably and which still need a teacher, see our guide to AI marking and feedback.
Automated marking tools can assess factual recall questions instantly, provide structured feedback on written responses, and track performance patterns across multiple assessments. A maths teacher using AI to mark Year 10 practice papers gets immediate data on which topics need re-teaching before the next lesson. An English teacher can use AI for initial draft feedback on structure and technical accuracy, then focus their own expertise on evaluating argument quality, creative expression and individual progress.
The DfE is clear, however, that AI marking should supplement rather than replace teacher judgement. High-stakes assessments, summative evaluations and any context requiring understanding of individual pupil circumstances require human oversight. King's College London's 2025 guidance on AI-assisted marking reinforces this: AI is a tool to support, not replace, professional activity. Schools running AI marking pilots (using tools like Graide, KEATH and TeacherMatic) report that teachers save 3-5 hours per week on routine marking whilst maintaining assessment quality. For practical strategies on integrating AI into your assessment workflow, see our guide to AI and student assessment.
In practical terms, AI assessment integration works best when teachers define clear boundaries. Use AI for: multiple-choice auto-marking, factual recall testing, first-pass grammar and spelling feedback, generating exam-style questions by topic, and tracking pupil progress across multiple data points. Avoid using AI for: evaluating creative expression, judging the quality of extended arguments, making decisions about pupil grouping or setting, or any assessment that contributes to formal reporting without human review. The Chartered College of Teaching's 2025 AI certification programme reinforces this distinction between formative AI support and summative professional judgement.
Genuine differentiation across a full class of 30 pupils has always been the aspiration; AI makes it practically achievable. Tools like Diffit can take a single source text and instantly produce versions at three or four reading levels. ChatGPT and Claude can generate problem sets that progress from foundational to extended, matched to individual pupils' current working levels. What previously required an evening of preparation now takes minutes. For prompt templates and subject-specific examples, see our full guide to AI differentiation in the classroom.
For pupils with Special Educational Needs and Disabilities, AI offers particular value. The Center for Democracy and Technology (2025) found that 57% of special education teachers in the US already use AI for IEP-related tasks, including identifying learning patterns, summarising progress data and drafting accommodation recommendations. In UK contexts, AI can help SENCOs generate accessible resources (simplified text, visual schedules, social stories), track EHCP targets across multiple data points, and identify early warning signs when a pupil's engagement or attainment begins to drop. The key safeguard is human oversight: AI should inform professional judgement about SEND pupils, never substitute for it.
Richard Mayer's multimedia learning principles (2009) offer a theoretical foundation for deciding when combinations of words, images and narration support learning. AI can suggest when to incorporate visual representations, audio explanations or hands-on activities based on the demands of the content and pupil response patterns. Schools using AI-driven grouping dashboards report more precise intervention targeting and faster identification of pupils requiring additional scaffolding. For comprehensive guidance on AI applications for inclusive education, see our dedicated guides to AI in special education and differentiation strategies.
Seventy-six per cent of teachers report receiving no formal AI training, despite 85% already using AI tools in their practice (Center for Democracy and Technology, 2025). This gap between adoption and understanding creates risk: teachers use AI without understanding its limitations, schools lack policies to guide responsible use, and pupils receive inconsistent messages about when and how AI is appropriate.
Building school-wide AI literacy starts with three foundations. First, every teacher needs a working understanding of what generative AI does and does not do, including its tendency to produce plausible-sounding but incorrect information. Our guide to AI literacy for teachers covers the technical foundations accessibly. Second, schools need a clear, practical AI policy that addresses which tools are approved, what data can and cannot be inputted, and how AI use should be acknowledged. The DfE published its first formal guidance in June 2025; for a step-by-step guide to translating this into a workable school document, see creating your school's AI policy.
Third, teachers need practical prompt-writing skills. The difference between a useful and useless AI output almost always comes down to prompt quality. Specificity about year group, prior knowledge, curriculum objectives and desired output format transforms generic responses into genuinely useful resources. Our guide to AI prompts every teacher should know provides subject-specific templates that work across ChatGPT, Gemini and Claude. For teachers ready to integrate AI into their daily workflow, teaching with an AI co-pilot walks through how to use AI as a thinking partner for planning, differentiation and reflection.
The Chartered College of Teaching now offers a free certified assessment for AI literacy, providing a standardised benchmark for staff development planning. Schools can use this to identify which staff members are confident, which need foundational training, and which are ready to become AI champions who support colleagues. Pairing this assessment with a termly review of AI tool usage creates an evidence-based approach to professional development that avoids both complacency and panic.
Pupils also need AI literacy instruction. They should understand that AI generates text through statistical prediction, not understanding; that AI can produce confident-sounding errors; and that using AI to complete assignments without acknowledgement is dishonest. Schools with clear pupil-facing AI guidance report fewer integrity issues and more productive use of AI as a learning tool. The strongest approaches treat AI as a topic within the existing curriculum (discussing it in computing, English and PSHE) rather than as a standalone initiative.
AI-powered engagement analytics can identify disengaged pupils before their performance drops, giving teachers an early warning system that supplements professional intuition. These tools analyse participation patterns, homework completion rates and assessment trends to flag pupils who may need additional support. Schools in Manchester and Birmingham have reported success using such systems to move from reactive to proactive intervention.
AI also supports engagement through adaptive content delivery. When a pupil struggles with a concept, AI can suggest alternative explanations, visual representations or scaffolded practice tasks in real-time. When a pupil masters material quickly, AI can offer extension activities without the teacher needing to prepare them in advance. This responsiveness is especially valuable in mixed-ability classes where the range of need can be substantial.
The evidence, however, comes with a caveat. The EEF's (2024) review of technology interventions reminds us that the tool matters less than how it is used. AI dashboards that generate data without clear pedagogical response add noise, not value. The teachers who benefit most from AI engagement tools are those who build specific routines: checking the dashboard at the start of each day, using it to inform that day's seating plan or questioning strategy, and reviewing weekly trends to adjust medium-term planning. The technology enables faster, more precise decisions, but the decisions remain the teacher's.
The teachers who integrate AI successfully share one trait: they start small and evaluate honestly before expanding. Here is a four-week roadmap based on patterns from schools that have adopted AI effectively.
| Week | Focus | What to Do | Evaluate |
|---|---|---|---|
| 1 | Learn the basics | Sign up for one AI tool (ChatGPT or Gemini free tier). Ask it to explain a topic you teach well. Notice where it is accurate and where it is wrong. | Can you spot inaccuracies? Do you trust it enough to adapt its outputs? |
| 2 | Planning support | Use AI to generate starter activities or homework tasks for one subject. Be specific in your prompts: include year group, topic, prior knowledge. | Did it save time? How much editing did the output need? Was it better or worse than what you would have created? |
| 3 | Assessment support | Use AI to generate quiz questions or provide first-pass feedback on a low-stakes assessment. Compare AI feedback to your own judgement on the same work. | Was the feedback accurate? Would pupils find it useful? Where did it miss the mark? |
| 4 | Reflect and decide | Review your three weeks of use. Identify the one application that saved the most time with acceptable quality. Make it a regular part of your workflow. | What will you continue using? What did you try but reject? Share findings with a colleague. |
This graduated approach prevents the two most common failure modes: trying to do everything at once (leading to overwhelm and abandonment) and using AI for tasks where it adds no value (leading to frustration and scepticism). The teachers who become confident AI users are not the most technically skilled; they are the ones who evaluate honestly and build on what works.
Artificial intelligence in education refers to software that performs tasks usually requiring human intelligence, such as planning lessons or analysing data. It works by recognising patterns in large datasets to generate text, images, or feedback for learners. Teachers use these tools as assistants rather than replacements for professional judgement.
Teachers typically start by using AI for administrative tasks such as drafting emails, creating rubrics, or generating lesson plans. Many then move towards student-facing applications like adaptive platforms that adjust work difficulty for different pupils. Effective implementation requires checking every AI output for factual accuracy and ensuring no sensitive pupil data is shared with the platforms.
The primary benefit is significant time saving, as AI can generate a complete lesson sequence or resource bank in minutes. It allows teachers to create multiple versions of the same material for different reading levels, making differentiation much easier to manage. This reduction in workload gives teachers more time to focus on individual pupil support and relationship building.
Research from the Education Endowment Foundation indicates that technology has a positive impact when combined with strong pedagogy. A 2024 DfE report found that schools with clear AI policies see more effective usage than those without. Meta-analysis suggests that AI-supported feedback can be as effective as traditional formative assessment when used correctly.
A frequent error is over-relying on the software and failing to verify the facts it produces, which can lead to misinformation in lesson materials. Another mistake is inputting private pupil names or data into general tools, which violates GDPR guidelines. Teachers should also avoid using AI for complex emotional judgements that require a deep understanding of a child's unique circumstances.
The choice depends on the specific task, with general tools being strong for nuanced writing and document analysis. Specialist platforms are often better for the UK curriculum because they include templates for specific assessment points. Most educators find success by combining one general assistant with one tool designed specifically for classroom use.
Data protection must come first: never input pupil names, SEN data, behavioural records or any personally identifiable information into any AI tool. This is not just good practice; it is a legal requirement under UK GDPR. Even AI tools that claim data is not stored may process it through servers outside the UK. Schools should maintain an approved tools list and require senior leadership sign-off before any new AI tool is introduced. For a comprehensive treatment of the ethical dimensions, see our guide to AI ethics in education.
Beyond data protection, the ethical considerations are real. AI outputs can contain bias, particularly in assessment and grouping recommendations. AI-generated content should always be reviewed for accuracy before pupils see it. Academic integrity policies need updating to address how pupils can and cannot use AI in their work. These are not reasons to avoid AI, but they are reasons to approach it systematically rather than letting adoption happen without governance.
The best starting point is simple: choose one AI tool and one use case. Use ChatGPT to generate starter activities for one subject for a half-term. Evaluate honestly: did it save time? Were the outputs good enough? What did you need to change? Then expand gradually. Essex primary schools have found success adopting one new AI tool per term, allowing thorough evaluation before scaling. The Chartered College of Teaching now offers a free certified assessment for AI literacy, providing a useful benchmark for staff development planning. For a comprehensive overview of ethical frameworks and bias considerations, see our guide to AI in modern education: challenges and opportunities.
For guidance on choosing the right AI platforms for your context, read our independent comparison of AI tools for teachers. And for a practical roadmap to building staff confidence through structured professional development, explore our guide to AI CPD for schools.
These research papers and reports provide the evidence base for AI in education. Each offers practical implications alongside the research findings.
Artificial Intelligence in Education: A Review
Zawacki-Richter et al. (2019)
A systematic review of 146 studies mapping AI applications in education. The authors identify four key areas: profiling and prediction, intelligent tutoring, assessment and evaluation, and adaptive systems. Particularly useful for understanding where AI adds genuine value versus where claims outstrip evidence.
The Role and Design of Teacher AI
Hwang & Tu (2021)
This paper examines how AI tools should be designed to support rather than supplant teacher decision-making. The authors propose a framework for teacher-AI collaboration that maintains pedagogical agency. Essential reading for school leaders developing AI implementation strategies.
Intelligence Unleashed: An Argument for AI in Education
1,200+ citations
Luckin et al. (2016)
Rose Luckin's influential report from UCL argues that AI's greatest contribution to education is not replacing teachers but providing them with better data and more time. The paper outlines practical applications for formative assessment, adaptive learning and early intervention that remain relevant today.
A Critical Review of AI in Education
900+ citations
Holmes et al. (2022)
This paper provides a balanced critical assessment of AI in education, examining both the potential benefits and the risks of uncritical adoption. The authors argue for evidence-informed implementation and highlight the importance of teacher involvement in AI tool design. Valuable for countering both hype and fear.
Teacher and AI: A Systematic Review of Research on Classroom Teachers' Use of AI
320+ citations
Chen et al. (2022)
A systematic review focused specifically on how classroom teachers use AI, rather than theoretical possibilities. The findings show that planning and assessment are the dominant use cases, with differentiation growing rapidly. The paper identifies training quality and institutional support as the strongest predictors of effective AI adoption.
The most productive teachers treat AI as a first-draft generator, not a finished-product machine. They prompt specifically, review critically, and adapt based on their professional knowledge of their pupils. The least productive users paste vague requests, accept outputs uncritically, and wonder why the results feel generic. The tool amplifies whatever you bring to it: strong pedagogical knowledge produces strong AI-assisted resources; weak prompts produce weak outputs regardless of which tool you use.
1. Using AI outputs without checking facts. Large language models generate plausible text, not verified truth. Always fact-check dates, statistics, quotations and scientific claims before sharing AI-generated content with pupils. A secondary history teacher in Bristol discovered that ChatGPT confidently attributed a quotation to Churchill that Churchill never said. The AI was not lying; it was pattern-matching based on training data that included the misattribution.
2. Inputting pupil data into AI tools. Names, SEN information, behavioural records and assessment scores must never be entered into any external AI system. Use anonymised references ("Pupil A is working at greater depth in reading") or generic descriptions. This is a UK GDPR requirement, not a suggestion.
3. Expecting AI to understand your class. AI does not know that three pupils in your Year 5 class arrived mid-year with limited English, that your school uses a specific phonics programme, or that your Year 10s have already covered cell biology but not genetics. You must provide this context in every prompt for the output to be useful.
4. Replacing thinking time with generation time. The pedagogical decisions (what to teach, in what order, with what assessment) are the valuable part of planning. AI should handle the production work (formatting, generating variations, creating resources) after you have made those decisions. Outsourcing the thinking to AI produces technically adequate but pedagogically shallow lessons.
5. Trying every tool at once. Tool fatigue is real. Schools that mandate three or four AI tools simultaneously see lower adoption than those that support mastery of one tool before introducing the next. Pick the tool that addresses your biggest time pressure and learn to use it well before expanding.
AI applications vary significantly across subjects, and the teachers getting most value match the tool to their subject's specific demands. A one-size-fits-all approach wastes time. Here is what experienced teachers report works in practice across the major subject areas.
English and Literacy. AI generates differentiated reading comprehension questions from any text in seconds, creates model paragraphs at different GCSE grade boundaries, and produces vocabulary-building activities matched to reading age. A Year 9 English teacher in Leeds uses Claude to generate three model responses to a literature essay question (one at grade 4, one at grade 6, one at grade 8), then uses them as the basis for a class discussion about what makes writing effective. The AI does the generation; the teacher does the pedagogical thinking.
Mathematics. AI creates problem sets that progress through difficulty levels, generates worked examples with step-by-step solutions, and produces diagnostic questions that target specific misconceptions. A primary maths lead uses ChatGPT to create "What's the same? What's different?" comparison tasks for fractions, then adapts them based on her knowledge of which pupils need concrete representations versus abstract notation. For guidance on managing pupil AI use and assessment design, see AI and academic integrity.
Science. AI generates exam-style questions aligned to AQA, Edexcel or OCR specification points, creates revision summaries that emphasise required practicals, and produces differentiated experiment planning sheets. It is particularly useful for generating scaffolded scientific writing frames that support pupils in structuring method, results and conclusion sections using appropriate scientific vocabulary.
Humanities. AI produces source analysis scaffolds for History, creates structured debate preparation materials for RE, and generates case study summaries for Geography. A GCSE History teacher uses Gemini to create "evidence cards" summarising different historical interpretations of the same event, which pupils then evaluate and rank. The AI curates the sources; the pupils develop the critical thinking.
EYFS and KS1. AI generates phonics-based reading materials at specific phases, creates social stories for transitions, and produces visual timetable descriptions. Teachers of younger children find AI most useful for administrative tasks (report comments, parent communication drafts, EHCP evidence summaries) rather than direct pupil-facing content, since young children need human interaction above all else.
AI reduces lesson planning time from hours to minutes, but the quality depends entirely on how specifically you prompt it. Vague requests produce generic plans. Specific prompts that include year group, prior knowledge, curriculum objectives and desired assessment evidence produce plans worth adapting.
A primary teacher planning a Year 4 fractions sequence might prompt: "Create a 5-lesson sequence on equivalent fractions for Year 4, assuming pupils can identify halves and quarters but struggle with the concept of equivalence. Include concrete-pictorial-abstract progression, paired activities, and a formative assessment task for lesson 3." The output will need professional refinement, but the structural work is done in seconds rather than an evening.
Secondary teachers report particular value in using AI to generate starter activities, worked examples at multiple difficulty levels, and homework tasks aligned to specific exam board mark schemes. A 2025 survey by Teacher Tapp found that 47% of teachers who use AI apply it primarily to planning and resource creation. The time saved is real, but the pedagogical decisions remain yours: knowing which activities suit your specific class, understanding where misconceptions typically arise, and judging the right level of challenge. For a detailed walkthrough of AI-assisted planning with subject-specific examples, see our full guide to AI in lesson planning.
The difference between a useful and useless AI-generated lesson plan almost always comes down to prompt structure. Compare these two approaches:
| Weak Prompt | Strong Prompt |
|---|---|
| "Create a lesson plan about the water cycle for Year 5." | "Create a 60-minute lesson on the water cycle for Year 5. Pupils can name evaporation and condensation but confuse precipitation with condensation. Include a concrete demonstration, paired discussion using sentence stems, and a 6-question exit ticket assessing the distinction between precipitation and condensation. Align to NC KS2 Science: states of matter." |
The strong prompt includes five elements that make the output immediately useful: time constraint, prior knowledge, specific misconception, activity types, and curriculum alignment. Without these, AI produces a generic plan that requires more editing time than it saved. Our detailed guide to effective AI prompts for teachers provides templates for every major subject and key stage.
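For teachers (or school digital leads) comfortable with a little scripting, the five elements above can be captured in a reusable template so no element gets forgotten. The sketch below is purely illustrative: the function and field names are our own invention, not part of any AI tool, and the resulting string would simply be pasted into whichever chatbot you use.

```python
# Hypothetical helper: assembles the five elements of a strong planning
# prompt (time constraint, prior knowledge, misconception, activity types,
# curriculum alignment) into one specific prompt string.

def build_lesson_prompt(duration, year_group, topic, prior_knowledge,
                        misconception, activities, curriculum_link):
    """Combine the five elements of a strong planning prompt."""
    return (
        f"Create a {duration}-minute lesson on {topic} for {year_group}. "
        f"Pupils {prior_knowledge}, but {misconception}. "
        f"Include {activities}. "
        f"Align to {curriculum_link}."
    )

prompt = build_lesson_prompt(
    duration=60,
    year_group="Year 5",
    topic="the water cycle",
    prior_knowledge="can name evaporation and condensation",
    misconception="confuse precipitation with condensation",
    activities=("a concrete demonstration, paired discussion using "
                "sentence stems, and a 6-question exit ticket"),
    curriculum_link="NC KS2 Science: states of matter",
)
print(prompt)
```

The point of the template is not automation but discipline: if any field is hard to fill in, that is usually the element your prompt was missing.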
The DfE's 2025 guidance explicitly supports AI for formative, low-stakes marking, including classroom quizzes, homework feedback and exam-style question generation. Dylan Wiliam's research on formative assessment demonstrates that timely, specific feedback significantly improves achievement, yet traditional marking methods often delay this intervention by days or weeks. AI closes that gap. For a detailed breakdown of which subjects AI can mark reliably and which still need a teacher, see our guide to AI marking and feedback.
Automated marking tools can assess factual recall questions instantly, provide structured feedback on written responses, and track performance patterns across multiple assessments. A maths teacher using AI to mark Year 10 practice papers gets immediate data on which topics need re-teaching before the next lesson. An English teacher can use AI for initial draft feedback on structure and technical accuracy, then focus their own expertise on evaluating argument quality, creative expression and individual progress.
The DfE is clear, however, that AI marking should supplement rather than replace teacher judgement. High-stakes assessments, summative evaluations and any context requiring understanding of individual pupil circumstances require human oversight. King's College London's 2025 guidance on AI-assisted marking reinforces this: AI is a tool to support, not replace, professional activity. Schools running AI marking pilots (using tools like Graide, KEATH and TeacherMatic) report that teachers save 3-5 hours per week on routine marking whilst maintaining assessment quality. For practical strategies on integrating AI into your assessment workflow, see our guide to AI and student assessment.
In practical terms, AI assessment integration works best when teachers define clear boundaries. Use AI for: multiple-choice auto-marking, factual recall testing, first-pass grammar and spelling feedback, generating exam-style questions by topic, and tracking pupil progress across multiple data points. Avoid using AI for: evaluating creative expression, judging the quality of extended arguments, making decisions about pupil grouping or setting, or any assessment that contributes to formal reporting without human review. The Chartered College of Teaching's 2025 AI certification programme reinforces this distinction between formative AI support and summative professional judgement.
Genuine differentiation across a full class of 30 pupils has always been the aspiration; AI makes it practically achievable. Tools like Diffit can take a single source text and instantly produce versions at three or four reading levels. ChatGPT and Claude can generate problem sets that progress from foundational to extended, matched to individual pupils' current working levels. What previously required an evening of preparation now takes minutes. For prompt templates and subject-specific examples, see our full guide to AI differentiation in the classroom.
For pupils with Special Educational Needs and Disabilities, AI offers particular value. The Center for Democracy and Technology (2025) found that 57% of special education teachers in the US already use AI for IEP-related tasks, including identifying learning patterns, summarising progress data and drafting accommodation recommendations. In UK contexts, AI can help SENCOs generate accessible resources (simplified text, visual schedules, social stories), track EHCP targets across multiple data points, and identify early warning signs when a pupil's engagement or attainment begins to drop. The key safeguard is human oversight: AI should inform professional judgement about SEND pupils, never substitute for it.
Richard Mayer's multimedia learning principles (2009) offer a theoretical foundation for using AI to present content through well-designed combinations of words, visuals and interaction. AI can suggest when to incorporate visual representations, audio explanations or kinaesthetic activities based on pupil response patterns. Schools using AI-driven grouping dashboards report more precise intervention targeting and faster identification of pupils requiring additional scaffolding. For comprehensive guidance on AI applications for inclusive education, see our dedicated guides to AI in special education and differentiation strategies.
According to the Center for Democracy and Technology (2025), 76% of teachers report receiving no formal AI training, despite 85% already using AI tools in their practice. This gap between adoption and understanding creates risk: teachers use AI without understanding its limitations, schools lack policies to guide responsible use, and pupils receive inconsistent messages about when and how AI is appropriate.
Building school-wide AI literacy starts with three foundations. First, every teacher needs a working understanding of what generative AI does and does not do, including its tendency to produce plausible-sounding but incorrect information. Our guide to AI literacy for teachers covers the technical foundations accessibly. Second, schools need a clear, practical AI policy that addresses which tools are approved, what data can and cannot be entered, and how AI use should be acknowledged. The DfE published its first formal guidance in June 2025; for a step-by-step guide to translating this into a workable school document, see creating your school's AI policy.
Third, teachers need practical prompt-writing skills. The difference between a useful and useless AI output almost always comes down to prompt quality. Specificity about year group, prior knowledge, curriculum objectives and desired output format transforms generic responses into genuinely useful resources. Our guide to AI prompts every teacher should know provides subject-specific templates that work across ChatGPT, Gemini and Claude. For teachers ready to integrate AI into their daily workflow, teaching with an AI co-pilot walks through how to use AI as a thinking partner for planning, differentiation and reflection.
The Chartered College of Teaching now offers a free certified assessment for AI literacy, providing a standardised benchmark for staff development planning. Schools can use this to identify which staff members are confident, which need foundational training, and which are ready to become AI champions who support colleagues. Pairing this assessment with a termly review of AI tool usage creates an evidence-based approach to professional development that avoids both complacency and panic.
Pupils also need AI literacy instruction. They should understand that AI generates text through statistical prediction, not understanding; that AI can produce confident-sounding errors; and that using AI to complete assignments without acknowledgement is dishonest. Schools with clear pupil-facing AI guidance report fewer integrity issues and more productive use of AI as a learning tool. The strongest approaches treat AI as a topic within the existing curriculum (discussing it in computing, English and PSHE) rather than as a standalone initiative.
AI-powered engagement analytics can identify disengaged pupils before their performance drops, giving teachers an early warning system that supplements professional intuition. These tools analyse participation patterns, homework completion rates and assessment trends to flag pupils who may need additional support. Schools in Manchester and Birmingham have reported success using such systems to move from reactive to proactive intervention.
AI also supports engagement through adaptive content delivery. When a pupil struggles with a concept, AI can suggest alternative explanations, visual representations or scaffolded practice tasks in real time. When a pupil masters material quickly, AI can offer extension activities without the teacher needing to prepare them in advance. This responsiveness is especially valuable in mixed-ability classes where the range of need can be substantial.
The evidence, however, comes with a caveat. The EEF's (2024) review of technology interventions reminds us that the tool matters less than how it is used. AI dashboards that generate data without clear pedagogical response add noise, not value. The teachers who benefit most from AI engagement tools are those who build specific routines: checking the dashboard at the start of each day, using it to inform that day's seating plan or questioning strategy, and reviewing weekly trends to adjust medium-term planning. The technology enables faster, more precise decisions, but the decisions remain the teacher's.
The teachers who integrate AI successfully share one trait: they start small and evaluate honestly before expanding. Here is a four-week roadmap based on patterns from schools that have adopted AI effectively.
| Week | Focus | What to Do | Evaluate |
|---|---|---|---|
| 1 | Learn the basics | Sign up for one AI tool (ChatGPT or Gemini free tier). Ask it to explain a topic you teach well. Notice where it is accurate and where it is wrong. | Can you spot inaccuracies? Do you trust it enough to adapt its outputs? |
| 2 | Planning support | Use AI to generate starter activities or homework tasks for one subject. Be specific in your prompts: include year group, topic, prior knowledge. | Did it save time? How much editing did the output need? Was it better or worse than what you would have created? |
| 3 | Assessment support | Use AI to generate quiz questions or provide first-pass feedback on a low-stakes assessment. Compare AI feedback to your own judgement on the same work. | Was the feedback accurate? Would pupils find it useful? Where did it miss the mark? |
| 4 | Reflect and decide | Review your three weeks of use. Identify the one application that saved the most time with acceptable quality. Make it a regular part of your workflow. | What will you continue using? What did you try but reject? Share findings with a colleague. |
This graduated approach prevents the two most common failure modes: trying to do everything at once (leading to overwhelm and abandonment) and using AI for tasks where it adds no value (leading to frustration and scepticism). The teachers who become confident AI users are not the most technically skilled; they are the ones who evaluate honestly and build on what works.
Artificial intelligence in education refers to software that performs tasks usually requiring human intelligence, such as planning lessons or analysing data. It works by recognising patterns in large datasets to generate text, images, or feedback for learners. Teachers use these tools as assistants rather than replacements for professional judgement.
Teachers typically start by using AI for administrative tasks such as drafting emails, creating rubrics, or generating lesson plans. Many then move towards student-facing applications like adaptive platforms that adjust work difficulty for different pupils. Effective implementation requires checking every AI output for factual accuracy and ensuring no sensitive pupil data is shared with the platforms.
The primary benefit is significant time saving, as AI can generate a complete lesson sequence or resource bank in minutes. It allows teachers to create multiple versions of the same material for different reading levels, making differentiation much easier to manage. This reduction in workload gives teachers more time to focus on individual pupil support and relationship building.
Research from the Education Endowment Foundation indicates that technology has a positive impact when combined with strong pedagogy. A 2024 DfE report found that schools with clear AI policies see more effective usage than those without. Meta-analysis suggests that AI-supported feedback can be as effective as traditional formative assessment when used correctly.
A frequent error is over-relying on the software and failing to verify the facts it produces, which can lead to misinformation in lesson materials. Another mistake is inputting private pupil names or data into general tools, which violates GDPR guidelines. Teachers should also avoid using AI for complex emotional judgements that require a deep understanding of a child's unique circumstances.
The choice depends on the specific task, with general tools being strong for nuanced writing and document analysis. Specialist platforms are often better for the UK curriculum because they include templates for specific assessment points. Most educators find success by combining one general assistant with one tool designed specifically for classroom use.
Data protection must come first: never input pupil names, SEN data, behavioural records or any personally identifiable information into any AI tool. This is not just good practice; it is a legal requirement under UK GDPR. Even AI tools that claim data is not stored may process it through servers outside the UK. Schools should maintain an approved tools list and require senior leadership sign-off before any new AI tool is introduced. For a comprehensive treatment of the ethical dimensions, see our guide to AI ethics in education.
Beyond data protection, the ethical considerations are real. AI outputs can contain bias, particularly in assessment and grouping recommendations. AI-generated content should always be reviewed for accuracy before pupils see it. Academic integrity policies need updating to address how pupils can and cannot use AI in their work. These are not reasons to avoid AI, but they are reasons to approach it systematically rather than letting adoption happen without governance.
The best starting point is simple: choose one AI tool and one use case. Use ChatGPT to generate starter activities for one subject for a half-term. Evaluate honestly: did it save time? Were the outputs good enough? What did you need to change? Then expand gradually. Essex primary schools have found success adopting one new AI tool per term, allowing thorough evaluation before scaling. The Chartered College of Teaching now offers a free certified assessment for AI literacy, providing a useful benchmark for staff development planning. For a comprehensive overview of ethical frameworks and bias considerations, see our guide to AI in modern education: challenges and opportunities.
For guidance on choosing the right AI platforms for your context, read our independent comparison of AI tools for teachers. And for a practical roadmap to building staff confidence through structured professional development, explore our guide to AI CPD for schools.
These peer-reviewed papers provide the evidence base for AI in education. Each offers practical implications alongside the research findings.
Artificial Intelligence in Education: A Review
Zawacki-Richter et al. (2019)
A systematic review of 146 studies mapping AI applications in education. The authors identify four key areas: profiling and prediction, intelligent tutoring, assessment and evaluation, and adaptive systems. Particularly useful for understanding where AI adds genuine value versus where claims outstrip evidence.
The Role and Design of Teacher AI
Hwang & Tu (2021)
This paper examines how AI tools should be designed to support rather than supplant teacher decision-making. The authors propose a framework for teacher-AI collaboration that maintains pedagogical agency. Essential reading for school leaders developing AI implementation strategies.
Intelligence Unleashed: An Argument for AI in Education
1,200+ citations
Luckin et al. (2016)
Rose Luckin's influential report from UCL argues that AI's greatest contribution to education is not replacing teachers but providing them with better data and more time. The paper outlines practical applications for formative assessment, adaptive learning and early intervention that remain relevant today.
A Critical Review of AI in Education
900+ citations
Holmes et al. (2022)
This paper provides a balanced critical assessment of AI in education, examining both the potential benefits and the risks of uncritical adoption. The authors argue for evidence-informed implementation and highlight the importance of teacher involvement in AI tool design. Valuable for countering both hype and fear.
Teacher and AI: A Systematic Review of Research on Classroom Teachers' Use of AI
320+ citations
Chen et al. (2022)
A systematic review focused specifically on how classroom teachers use AI, rather than theoretical possibilities. The findings show that planning and assessment are the dominant use cases, with differentiation growing rapidly. The paper identifies training quality and institutional support as the strongest predictors of effective AI adoption.