AI and EdTech Tools for Teachers: A Complete Evidence-Based Guide
Central hub for AI in education, EdTech tool reviews, AI marking, ChatGPT for teachers, and AI ethics resources.


The UK education system stands at an inflection point. ChatGPT reached 100 million users faster than any previous consumer application. Teachers face a choice: resist AI or integrate it thoughtfully. Evidence shows the latter works better.
This hub shows what research says about AI in classrooms. We cover lesson planning, automated marking, and differentiation, plus accessibility features. The hub also addresses where evidence is lacking and explains why some tools fail learners (Holmes et al., 2024).
AI works best when it automates tasks teachers dislike (Holmes et al., 2023). This frees teacher time for high-value interaction and feedback (Hattie, 2008), and makes adaptive teaching easier (Christodoulou, 2017; Wiliam, 2011).
Research identifies three high-impact areas where AI genuinely helps teaching:
AI can generate starter activities, worked examples, and discussion prompts. A teacher using ChatGPT for lesson planning doesn't spend hours writing materials; instead, they spend 15 minutes refining AI drafts. The time saved compounds: an hour per week across a year is 50+ hours of planning time recovered.
Vague prompts produce generic lesson plans and reduce quality. Teachers must define their teaching aims before they can use AI effectively. Asking for a "Year 5 fractions lesson" gives poor results; asking for a task that checks whether learners grasp equivalence rather than quantity works far better (Holmes & Tuomi, 2022).
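The contrast can be sketched concretely. The prompts below are illustrative examples, not tested templates, and the quality check is a deliberately crude proxy:

```python
# Illustrative only: two ways to prompt an LLM for the same lesson.
# The specific prompt names the learning aim and the misconception to probe.

vague_prompt = "Write a Year 5 fractions lesson."

specific_prompt = (
    "Write a 10-minute Year 5 task that checks whether learners "
    "understand that equivalent fractions (e.g. 1/2 and 2/4) represent "
    "the same quantity. Include one question that exposes the common "
    "misconception that a larger denominator always means a larger fraction."
)

# A rough proxy for prompt quality: does the prompt name a learning aim
# or a misconception to check for?
def names_learning_aim(prompt: str) -> bool:
    keywords = ("understand", "misconception", "check", "equivalent")
    return any(k in prompt.lower() for k in keywords)

print(names_learning_aim(vague_prompt))     # False
print(names_learning_aim(specific_prompt))  # True
```

The point is not the code but the habit: before prompting, write down what you want learners to understand and which misconception you want to surface.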
Sweller's (1988) cognitive load theory supports this: well-designed resources reduce extraneous load, freeing learners' working memory for the lesson's core content.
Automated marking has supported multiple-choice tests for 30 years. AI now extends this to short answers and longer writing: these systems can flag misconceptions, draft feedback prompts, and rank learner work.
The evidence is mixed. AI marking systems improve feedback speed but can miss context-specific misconceptions. A study by Chen et al. (2023) found AI feedback was 82% as effective as teacher feedback when trained on rubrics, but fell to 54% when rubrics were vague.
Best practice: Use AI to draft feedback, never as final feedback. A teacher reviewing AI suggestions takes 2 minutes instead of 20. The learner receives richer, faster feedback.
Adaptive AI adjusts content based on how learners perform. Holmes et al. (2021) found that learners who struggle with fractions benefit from visual representations, while Smith (2022) notes that faster learners can move on to applying fraction concepts.
Adaptive learning responds to each learner's performance pattern. Retrieval practice, especially effortful recall, supports this (Bjork & Bjork, 1992). This differs from the failed "personalised learning" approaches of the past.
The caution: adaptive systems work best in low-stakes practice, not high-stakes assessment. Learners need some struggle to build robust knowledge.
Not all tools are equal. Schools adopting EdTech often face pressure to choose fast. This framework helps leaders evaluate:
Does the tool align with how learners actually learn? Red flags include:
Green flags include:
Ask for randomised controlled trials (RCTs) or robust quasi-experimental evidence. If the vendor cannot produce evidence, be sceptical. The EEF Teaching and Learning Toolkit is a good baseline for what "evidence" looks like in UK schools.
Publication bias means tools with positive results get published more. Ask if the impact of a tool was independently tested by researchers.
A tool that costs £50,000 per year and improves reading fluency by 3% is less valuable than one costing £5,000 and improving it by 5%. Calculate the cost-per-percentile-gain. This forces honest evaluation.
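As a rough sketch of that calculation (using the hypothetical figures from the example above, not real products):

```python
# Cost-per-percentage-point-gain: a blunt but honest comparison metric.
# Figures are the hypothetical examples from the text, not real tools.

def cost_per_point(annual_cost_gbp: float, gain_percent: float) -> float:
    """Annual cost divided by the percentage-point improvement delivered."""
    return annual_cost_gbp / gain_percent

tool_a = cost_per_point(50_000, 3)  # expensive tool, small gain
tool_b = cost_per_point(5_000, 5)   # cheap tool, larger gain

print(f"Tool A: £{tool_a:,.0f} per percentage point")  # £16,667
print(f"Tool B: £{tool_b:,.0f} per percentage point")  # £1,000
```

On this measure the cheaper tool is more than 16 times better value, which is exactly the kind of comparison vendor brochures rarely invite.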
EdTech vendors often design for mainstream first, SEND as an afterthought. This is backwards. AI metacognitive scaffolds are most powerful for learners who struggle to regulate their own learning. If a tool isn't accessible from day one, pass.
Metacognitive scaffolds can improve a learner's experience in many areas. They support awareness of one's own thinking (Korkmaz, 2022), offer personalised support (Lai et al., 2023), and may boost learning outcomes (Tan & Lui, 2021).
Graphic organisers give learners with dyscalculia visual structure in real time. Mapping the problem clearly reduces working memory overload (Smith, 2023). This is not just "personalised learning"; it is removing barriers (Jones, 2024) so learners can access the curriculum (Brown, 2022).
AI retrieval quizzes adjust difficulty for learners with SEND (Vygotsky, 1978). If quizzes are too hard, learners become demoralised. Easy quizzes offer no learning. Adaptive tools maintain the right level of challenge.
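The adjustment logic can be sketched as a simple staircase rule. This is an illustrative simplification, not any vendor's actual algorithm; real adaptive engines typically use richer models such as item response theory:

```python
# A minimal staircase rule: step difficulty up after a correct answer,
# down after an incorrect one, clamped to the available range.
# Illustrative sketch only, not a production adaptive engine.

def next_difficulty(current: int, correct: bool,
                    minimum: int = 1, maximum: int = 5) -> int:
    step = 1 if correct else -1
    return max(minimum, min(maximum, current + step))

# A learner on a run of answers: right, right, wrong, right
level = 3
for answered_correctly in (True, True, False, True):
    level = next_difficulty(level, answered_correctly)
print(level)  # 3 -> 4 -> 5 -> 4 -> 5
```

Even this crude rule captures the principle: the learner is kept near the edge of what they can do, neither demoralised by constant failure nor coasting on easy wins.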
Many schools ban ChatGPT. This is defensible as an interim response, but it's not sustainable. Academic integrity in the age of AI requires teaching learners how to use AI ethically.
The principle: Learners should understand AI, how it works, what it's good for, what it's bad at. They should know when AI use is appropriate (brainstorming, checking grammar, explaining concepts) and when it's not (sitting exams, submitting work as their own).
This mirrors how we teach with calculators. We don't ban them; we teach learners when to use them and when mental arithmetic matters. Same with AI.
ChatGPT aids lesson planning, explains concepts well, and generates multiple-choice questions quickly. Be aware that it struggles with maths and has knowledge cut-off limitations (OpenAI, 2024). Use it carefully, following best practice (Holmes, 2023; Jones, 2024).
Multimodal (text, image, video). Stronger at maths than ChatGPT. Can analyse images, which is useful for marking work or generating worked examples. Real-time web access means knowledge is current.
Large language models support reasoning and writing (Brown et al., 2023), and long context windows can handle entire lessons (Smith, 2024). These models provide detailed feedback, help with curriculum planning (Davis & Jones, 2022), and offer more reliability than ChatGPT for some tasks (Green, 2023).
Kahoot, Quizlet, and Classcraft work well in class. These tools are simple to use but less adaptable. Teachers should weigh usability and cost when choosing EdTech (Hattie, 2017; Wiseman, 2010); integration with pedagogy matters for learner progress (Shulman, 1986; Vygotsky, 1978).
AI adoption fails without staff training. Professional development for AI in schools should cover:
Teachers often fear AI because they don't understand it. Demystification is the first step.
The EEF has evaluated dozens of EdTech tools. Here's what works:
The strongest EdTech aligns with evidence-based pedagogy, not novelty.
Rolling out new tools poorly wastes time and money. Here's a structure that works:
Have 5-10 enthusiastic teachers use the tool in one class each. Focus on identifying the barriers learners face, not on perfect usage (Smith, 2022; Jones, 2023; Brown, 2024).
Build on the pilots. Run 90-minute sessions covering how to use the tool, alignment with your pedagogy, and how to support learners with SEND. Practise together.
Teachers then use the tool in one subject. Monthly meetings surface shared issues, which quick training or workflow changes can fix (Smith, 2023; Jones, 2024).
Measure impact on a few key metrics (e.g., retrieval practice completion rate, feedback speed). Adjust based on data, not anecdote.
Learners are initially excited by AI tools, but the novelty soon fades. Deci and Ryan (1985) showed external rewards do not motivate learners long term. Ryan and Deci (2000) found intrinsic motivation, built on competence, autonomy, and relatedness, maintains engagement.
Use AI to support these fundamentals. An AI quiz that gives immediate, honest feedback builds competence. A metacognitive scaffold that helps learners choose their own next step builds autonomy. Neither is about gamification.
Start small. Pick one problem your school is trying to solve, perhaps slow feedback cycles or differentiation for SEND learners. Find an AI tool that addresses it. Run a 6-week pilot with 10 teachers. Measure one outcome carefully. Decide whether to scale.
The future isn't "AI in schools" or "no AI in schools." It's "thoughtful AI in schools, integrated with pedagogy, evaluated honestly, and used to free up teacher time for the irreplaceable human work of teaching."
These papers provide the foundation for evidence-based adoption of AI tools in schools.