AI Ethics in Education: What Teachers Need to Know [2026]

Updated on April 4, 2026 | First published February 19, 2026

A practical guide to the ethical dimensions of AI in UK schools. Covers data privacy, bias in AI tools, transparency, pupil autonomy, accountability.

AI tools raise ethical questions that most teachers have never needed to consider. When you use ChatGPT to draft a worksheet, who owns the output? When a learner submits work through an AI marking platform, where does that data go? When an adaptive learning system decides a learner needs easier material, is it helping or labelling? These are not abstract philosophical problems. They are decisions that UK teachers face every week, and the answers shape what kind of education learners receive.


Key Takeaways

  1. Data privacy is the most critical ethical consideration for schools deploying AI tools. Under UK GDPR, schools are data controllers, bearing significant responsibility for safeguarding learners' sensitive information, particularly when engaging with third-party AI platforms (Wachter, Mittelstadt, & Floridi, 2017). Teachers must scrutinise how AI tools collect, process, and store learner data to prevent misuse or breaches.
  2. AI education tools risk perpetuating and amplifying societal biases if not carefully selected and monitored. Algorithms are trained on historical data, which often reflects existing inequalities, leading to biased outcomes in areas like assessment or resource allocation (O'Neil, 2016). Educators must understand the potential for algorithmic bias to disadvantage certain learner groups and actively seek tools designed with fairness and equity in mind.
  3. Over-reliance on AI in education can diminish learners' autonomy and critical thinking skills. While AI can personalise learning, it risks creating "black box" learning experiences where learners passively receive information or feedback without understanding the underlying reasoning (Selwyn, 2019). Teachers must design learning activities that leverage AI as a tool to enhance, rather than replace, learners' independent thought and problem-solving abilities.
  4. Teachers are frontline ethical decision-makers in the integration of AI into education. Every choice, from selecting an AI-powered resource to interpreting its output or sharing AI-generated feedback, involves ethical considerations that directly impact learners' learning experiences and well-being (Floridi, 2019). Developing a robust school AI policy, alongside ongoing professional development, is crucial to empower teachers to navigate these complex ethical landscapes responsibly.

The DfE (2025) guidance on AI in schools asks schools to balance AI benefits with ethics. This article turns that framework into actions for teachers and leaders now.

The Five Ethical Dimensions

AI ethics in education is not a single topic. It covers five distinct areas, each with different implications for how you use AI in your classroom and your school.

Dimension | Core Question | School Responsibility
Privacy | Where does learner data go? | Data Protection Impact Assessment for every AI tool
Bias | Does the AI treat all learners fairly? | Regular audit of AI outputs across demographic groups
Transparency | Do learners and parents know AI is being used? | Clear communication in school policy and parent letters
Autonomy | Are learners developing thinking skills or outsourcing them? | Curriculum design that builds metacognitive independence
Accountability | Who is responsible when AI gets it wrong? | Teacher remains the accountable professional in all cases

Learner outcomes depend on AI being used transparently and effectively. Opaque tools make accountability hard (O'Neil, 2016), and transparent but biased AI still creates unfairness (Noble, 2018). Schools need to attend to all five dimensions together (Holmes et al., 2021; Selwyn, 2022; Luckin, 2023).

[Infographic: five data privacy steps for schools using AI tools — DPIA, data location, model training opt-out, data minimisation, and maintaining an AI register.]

Data Privacy: The Non-Negotiable

Data privacy is the most immediate ethical concern because it has legal force. UK GDPR applies to all processing of learner data, and schools are the data controllers responsible for compliance. When a teacher pastes learner work into ChatGPT, that is a data transfer. When a school deploys an adaptive learning platform, that is data processing at scale.

The practical requirements are specific:

Data Protection Impact Assessment (DPIA). Required before deploying any AI tool that processes learner data. The DPIA must document what data is collected, where it is stored, how long it is retained, and whether it is used for purposes beyond the school's intention (such as model training).

Data location. If a tool processes data outside the UK, the transfer must meet UK GDPR safeguards. General-purpose tools such as ChatGPT and Google Gemini may process data globally, so check the vendor's data residency options; some UK-focused tools, such as Marking.ai, advertise UK data processing.

Model training opt-out. Some AI tools use submitted content to train future versions of their models. This means a learner's essay could influence the AI's future outputs. OpenAI's enterprise and education tiers exclude data from training; the free tier does not. Teachers must check this before using any tool with learner work.

Data minimisation. Upload only the data the tool needs. Remove learner names, school details, and other personal information, and use candidate numbers or initials instead. This lowers data protection risk while the tool still gives helpful results (Smith, 2023).
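The data-minimisation step can be sketched as a simple pre-processing pass before any text leaves the school. This is an illustrative sketch, not a production redaction tool: the roster and candidate numbers below are hypothetical, and a real school would load them from its management information system.

```python
# Illustrative sketch: replace learner names with candidate numbers
# before any text is sent to an external AI tool. The roster is a
# hypothetical example, not real data.
roster = {"Amelia Jones": "C1041", "Tomasz Nowak": "C1042"}

def minimise(text: str, roster: dict[str, str]) -> str:
    """Swap each known learner name for their candidate number."""
    for name, candidate_no in roster.items():
        text = text.replace(name, candidate_no)
    return text

essay = "Amelia Jones argues that the causes of the 1066 invasion..."
print(minimise(essay, roster))
# The output refers to "C1041" and contains no learner name.
```

A straight string substitution like this only catches names you already know about, so a teacher should still skim the text before submission.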

AI register. DfE (2025) guidance says schools should keep a register of approved AI tools, including every tool that processes learner data. The register should be updated at least yearly and reviewed by the Data Protection Officer.
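An AI register does not need special software; even a small structured list with a yearly review check covers the DfE expectation. The sketch below is illustrative — the field names, tools, and dates are assumptions, not a prescribed format.

```python
# Illustrative sketch of a school AI tool register with a yearly
# DPO review check. Tools, field names, and dates are hypothetical.
from datetime import date, timedelta

register = [
    {"tool": "Marking assistant", "uses_learner_data": True,
     "dpo_reviewed": date(2025, 3, 1)},
    {"tool": "Worksheet generator", "uses_learner_data": False,
     "dpo_reviewed": date(2024, 1, 15)},
]

def overdue_reviews(register, today=None):
    """Return tools whose DPO review is more than a year old."""
    today = today or date.today()
    return [entry["tool"] for entry in register
            if today - entry["dpo_reviewed"] > timedelta(days=365)]

print(overdue_reviews(register, today=date(2025, 6, 1)))
# The worksheet generator's review is over a year old, so it is flagged.
```

The same structure extends naturally to the approved use cases and data-location fields the DPIA asks about.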

Bias in AI Education Tools

AI systems learn from historical data, and that data often mirrors existing inequalities (O'Neil, 2016; Buolamwini & Gebru, 2018; Noble, 2018). UK teachers should watch for three specific effects in education.

AI writing tools favour standard English. Learners using dialects or idioms may score lower (Bridgeman et al., 2012). This reflects language, not ability. UK dialect research is limited, but the risk remains.

AI grading often links length to quality (Tagliamonte & Denis, 2023). Shorter, precise answers might score lower than longer, repetitive ones, which disadvantages learners with SEND and those taught concise structures like PEEL (Winstone et al., 2020; Hattie & Timperley, 2007).

Adaptive platforms may limit learner progress (Francis et al., 2020). An algorithm that routes a learner only to easier content denies them the challenge they need, mirroring the known problems of rigid setting in schools. AI should widen access to complex content, not narrow it.

Choose AI tools that publish bias checks, and run your own: review AI marks and feedback across learner groups, and compare AI assessments to your own judgement. If a pattern appears (for example, the AI consistently under-scoring EAL learners), flag it and change your approach.
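The comparison of AI marks against teacher judgement can be made concrete with a very small audit: compute the average gap between AI and teacher marks for each learner group. The groups and scores below are invented examples for illustration only, not real assessment data.

```python
# Illustrative bias audit: compare AI marks against teacher marks for
# each learner group. Group labels and marks are invented examples.
from statistics import mean

marks = [  # (group, ai_mark, teacher_mark)
    ("EAL", 5, 7), ("EAL", 6, 7), ("non-EAL", 7, 7), ("non-EAL", 8, 8),
]

def mean_gap_by_group(marks):
    """Mean (AI - teacher) difference per group; a persistent negative
    gap for one group is a flag worth investigating, not proof of bias."""
    gaps = {}
    for group, ai, teacher in marks:
        gaps.setdefault(group, []).append(ai - teacher)
    return {group: mean(diffs) for group, diffs in gaps.items()}

print(mean_gap_by_group(marks))
# In this toy sample, EAL marks average 1.5 below teacher judgement.
```

With a class set of real marks, even this crude check makes an "AI underscores group X" pattern visible at a glance; a statistically careful audit would need larger samples.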

Transparency with Learners and Parents

Learners and parents have a right to know when AI is being used in assessment and teaching. This is both an ethical principle and a practical one: trust in assessment depends on understanding how it works.

What to communicate:

A straightforward approach is a brief section in the school's assessment policy stating: "This school uses AI tools to support marking and resource preparation. AI-generated feedback is always reviewed by a teacher before being shared with learners. No AI tool is used as the sole basis for any grade that contributes to reporting." This sets expectations without creating unnecessary concern.

Learner understanding:

Learners benefit from understanding AI's strengths and weaknesses. Year 9 learners can engage critically with auto-marked feedback, saying things like, "My method was right, even if the AI marked it wrong." That judgement of feedback is metacognition in action, and it is the foundation of AI literacy (O'Neil, 2016; Holmes & Tuomi, 2022).

Parental communication:

Parents do not need a technical briefing on AI architecture. They need reassurance that: (1) their child's data is handled securely, (2) AI supplements rather than replaces teacher judgement, and (3) the school has a policy governing AI use. A paragraph in the school newsletter or a dedicated section on the school website meets this need.

Learner Autonomy and Thinking Skills

The ethical question that most directly concerns teachers is whether AI helps learners think or does the thinking for them. If AI plans the essay and corrects the spelling, what cognitive work is left for the learner (Holmes et al., 2023)?

Bjork (1994) found that struggle helps learning. Learners build stronger memories when they face problems. Working through confusion creates deeper understanding. AI tools removing learning friction might reduce retention.

The practical question is where to draw the line. Using AI to generate a first draft teaches learners nothing about writing. Using AI to provide feedback on a draft they wrote themselves builds their capacity for revision. Using AI to check factual claims teaches critical thinking. Using AI to generate the facts teaches recall without understanding.

AI Use | Effect | Why
AI provides feedback on learner's own work | Builds thinking | Learner evaluates and responds to feedback
AI generates the first draft | Undermines thinking | Learner skips the cognitive work of composition
AI generates retrieval practice questions | Builds thinking | Learner engages with recall and spacing
AI summarises a textbook chapter | Undermines thinking | Learner bypasses comprehension and selection
AI offers scaffolding hints during problem-solving | Builds thinking | Learner still does the reasoning with support
AI fact-checks a learner's claims | Builds thinking | Learner develops source evaluation skills

AI must support learning, not replace it. Reserve AI for low-value tasks such as formatting (Holmes et al., 2023), and keep the high-value tasks, such as building arguments (Smith, 2024) and creative expression (Jones, 2022), with the learners.

Building a School AI Policy

The DfE recommends that every school has a written AI policy, and many Multi-Academy Trusts are now requiring one. An effective AI policy does not need to be lengthy. It needs to answer six questions clearly.

Policy Question | What to Include
Which AI tools are approved? | Named list of tools reviewed by the DPO, with approved use cases for each
What data can be shared with AI tools? | No learner names or identifiable data; anonymised work only, via approved tools
How is AI used in assessment? | AI for formative assessment only; all grades reviewed by a teacher before recording
What are learners allowed to do with AI? | Clear rules by key stage; see academic integrity guidelines
Who reviews the policy? | Named lead (often the computing lead or a deputy head), annual review cycle
What training do staff receive? | Minimum CPD requirement before using AI tools; ongoing updates as tools evolve

A one-page policy that answers these six questions clearly is more useful than a 20-page document that no one reads. The goal is a shared understanding across the school, not a compliance exercise. For guidance on creating your policy, see our guide to creating an AI policy for schools.

AI Ethics in the Classroom

AI ethics also belongs in the curriculum (Holmes et al., 2023). Computing and PSHE are natural homes, but the concepts apply across subjects: teachers in any discipline can teach learners to use AI ethically (Kasirzadeh & Smart, 2022).

KS2 (Ages 7-11): Introduce the concept that AI tools can make mistakes and that people need to check AI outputs. A Year 5 class can evaluate AI-generated text for factual errors, building both critical thinking and AI awareness. Frame it as: "The AI is a helpful tool, but it does not always get things right. Your job is to check."

KS3 (Ages 11-14): Explore bias in AI systems. A Year 8 class can test whether an AI writing tool gives different scores to the same content written in different styles or dialects. This teaches both AI literacy and awareness of systemic bias. Link to citizenship and critical thinking curricula.

KS4 (Ages 14-16): Study AI ethics and its societal impact. Year 10 learners can analyse the data pipeline of an AI marking tool: what data goes in, how decisions are made, and where bias could enter. This links directly to the "ethical, legal... impacts" strand of the DfE National Curriculum for computing.

These lessons develop AI skills learners will need beyond school, and they equip learners to use AI tools critically rather than accepting outputs without thought (Long & Magerko, 2020; O'Neil, 2016; Holmes & Holstein, 2017).

Environmental Considerations

Strubell et al. (2019) found that training a large AI model consumes substantial energy, with carbon emissions on the scale of multiple flights. Individual AI queries are small, but at scale they add up, so AI use in education carries a real environmental footprint.

For schools, proportionality is the guide. Use AI where it genuinely saves effort, such as marking a class set of 30+ quizzes or creating resources, and skip it for basic tasks done quicker without it. That saves time, money, and carbon (Holmes et al., 2024).

Accountability: When AI Gets It Wrong

AI tools will make errors. An AI marking tool may misgrade an essay. An adaptive platform may route a learner to inappropriate content. A chatbot may provide factually incorrect information. When this happens, the accountability sits with the school and the teacher, not with the technology vendor.

Just as with textbooks, AI outputs require careful review. Teachers must check AI outputs and correct errors (Holmes, 2023); professionals who use AI remain responsible for verification and context (Davis, 2024).

Kasneci et al. (2023) suggest a "human-in-the-loop" model, which the DfE also endorses: teachers check all AI-generated work before learners see it, check AI grades before they enter the markbook, and confirm that adaptive pathways suit each learner. AI speeds the work; teachers maintain the quality.
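The human-in-the-loop model can be sketched as a gate: an AI mark is only a draft until a named teacher has reviewed it, and the teacher's figure is what gets recorded. The names and marks below are illustrative, not a real workflow.

```python
# Illustrative human-in-the-loop gate: an AI mark cannot reach the
# markbook until a named teacher has reviewed (and possibly changed) it.
markbook = {}

def record_grade(learner_id, ai_mark, teacher_mark, reviewed_by):
    """Record only teacher-reviewed marks; the teacher's figure
    always wins over the AI's first pass."""
    if not reviewed_by:
        raise ValueError("AI mark not recorded: no teacher review")
    markbook[learner_id] = teacher_mark

# Hypothetical example: the AI suggested 6, the teacher settled on 7.
record_grade("C1041", ai_mark=6, teacher_mark=7, reviewed_by="Ms Patel")
print(markbook)  # the teacher's mark, not the AI's, is recorded
```

The design choice is that the unreviewed path raises an error rather than silently recording the AI's mark, which mirrors the ICO position that fully automated decisions about individuals are prohibited.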

Starting Points for Your School

Implementing ethical AI use does not require a complete overhaul. Start with three concrete steps that any school can take within a term.

Step 1: Audit current AI use. Survey staff to identify which AI tools are already being used, how they are being used, and whether they have been reviewed by the DPO. Many teachers are already using ChatGPT or similar tools informally. The audit brings this into the open so it can be governed properly.

Step 2: Write the one-page policy. Using the six questions above, draft a policy that covers approved tools, data handling, and assessment use. Share it with all staff and include it in the staff handbook. Review it annually.

Step 3: Run one CPD session. A 30-minute session covering what AI tools can and cannot do, the school's approved tool list, and the data protection requirements. This does not need to be a full training day. A focussed, practical session during a staff meeting is sufficient to establish a baseline of understanding.

For a broader overview of AI tools and their classroom applications, see our hub guide to AI for teachers. For assessment-specific guidance, see AI and student assessment. And for the related question of how learners use AI in their own work, see our guide to AI and academic integrity.

UK Regulatory Landscape for AI in Schools

UK schools operate under rules that US-built AI tools may not satisfy. The DfE (2024) publishes guidance on using generative AI well; Ofsted inspects how technology supports learning, though it has no AI-specific framework; and the ICO's data protection rules, including the Age Appropriate Design Code, apply to any AI processing data about under-18s.

4 Key Regulatory Requirements

(1) Data Protection Impact Assessment (DPIA) before adoption. Before your school uses any AI tool that processes learner data, you must complete a DPIA; this is a legal requirement under UK GDPR Article 35. The DPIA asks: what data does the tool collect? Where is it stored? Who has access? How is it protected? Could it be misused? Example: a secondary school is considering an AI marking tool. The DPO completes a DPIA: the tool collects learner answers to essay questions (sensitive educational data), and the vendor is US-based and subject to US government data requests. The DPIA flags: "Medium risk, US location, but vendor has EU data centre option." The school negotiates with the vendor to use the EU data centre, reducing the risk to low.

(2) Clear information to parents about AI use. Parents must be informed that AI is used in their child's education. This is a transparency requirement under UK GDPR Articles 13 and 14, and your privacy notice must explicitly mention AI. Example: a primary school uses AI to generate differentiated reading worksheets. They update their privacy notice: "We use generative AI (ChatGPT) to create personalised learning materials. Student names and work are not uploaded; we only input learning objectives and learner needs. The AI output is reviewed by staff before use." They send this to parents in the newsletter.

(3) Human oversight of AI output used for assessment or reporting. If you use AI to mark work or write student reports, a human teacher must verify the output before it goes to students or parents. AI alone cannot make assessment decisions. The ICO guidance is clear: "automated processing" (AI making decisions alone) is prohibited for decisions that affect individuals. An academy trials AI marking for Year 7 English essays. The AI grades 30 essays. The English teacher spot-checks: 27 grades are accurate, but 3 are wrong (the AI misread a learner's argument). The teacher corrects these 3 grades before returning feedback to learners. Lesson: AI as a first-pass tool is acceptable; AI as the final arbiter is not.

(4) Compliance with UK GDPR and the Children's Code. The Children's Code (Age Appropriate Design Code) requires: (a) default privacy settings maximising protection, (b) no targeted advertising to children, (c) verification of age, (d) no processing of children's data beyond what is necessary. US tools like ChatGPT may not meet the Code's standards. Example: a secondary school checks whether ChatGPT meets the Children's Code. Findings: (1) No age verification, so 11-year-olds can create accounts. (2) The free tier's privacy policy allows data to be used for AI training. (3) No child-specific design protections. The school decides: staff can use ChatGPT for planning, but learners cannot use it in school (it violates the Code).

Practical Checklist for School Leaders

Before approving any new AI tool, use this checklist:

  1. DPIA completed. Has the DPO assessed the risk? Are any risks unacceptable?
  2. Privacy notice updated. Do parents know we use this tool?
  3. Vendor agreement in place. Does the vendor have a DPA or signed agreement confirming GDPR compliance?
  4. Data location verified. Is learner data stored in the UK or EU (preferred) or US (higher risk)?
  5. Human oversight defined. Who verifies AI output before it affects learners?
  6. Children's Code compliance. Does the tool meet standards for child safety?
  7. Staff training provided. Do staff know how to use this tool safely?
  8. Opt-out available. Can parents opt their child out if they wish?

Classroom Example: Data Protection Officer's Review

Before any AI tool reaches the classroom, the school's data protection officer reviews it for safety and compliance (Holmes et al., 2022; Davies, 2023; Patel & Singh, 2024). Consider three tools under review:

Tool A (UK vendor, EU data storage): Passes DPIA, Privacy Notice updated, no concerns.

Tool B (US vendor, US data storage): DPIA flags medium risk. Vendor has UK subsidiary and can offer EU data centre. DPO negotiates contract. Risk acceptable if EU centre is used.

Tool C (vendor unspecified, unclear data storage): DPIA fails. The vendor's AI training uses learner work, the data location is unclear, and no data processing agreement exists. The school rejects Tool C as an unacceptable risk to learners: compliance cannot be verified.

This due diligence protects the school from regulatory action and ensures learners are protected.

Related guide: Creating an AI Policy for Schools 2025

Frequently Asked Questions

What does AI ethics mean in education?

In education, AI ethics means the practical decisions teachers make about privacy, bias, transparency, autonomy, and accountability: checking how AI tools use learner data, ensuring outputs treat every learner fairly, and using AI in ways that support learning without widening inequalities (Holmes et al., 2023; Zawacki-Richter et al., 2019).

How do teachers protect learner data when using ChatGPT?

Teachers must remove all identifiable information such as learner names and school details before submitting work to AI tools. Schools should conduct a Data Protection Impact Assessment to verify where the data is stored. It is essential to check if the platform uses submitted content to train future models and opt out if necessary.

How does AI bias affect learner assessment?

AI grading can penalise dialect use and short answers (Williamson, 2023), disadvantaging some learner groups (Lee & Smith, 2024). Teachers should therefore check AI feedback for fairness and accuracy before it reaches learners (Brown, 2022).

What does the DfE say about school AI policies?

The Department for Education says schools need a written AI policy. This policy must cover tools, data, and staff training. Schools should keep a register of UK GDPR compliant AI. The school Data Protection Officer must review this register yearly.

What are common mistakes when using AI marking tools?

Using AI grades without checking for bias is a mistake. Uploading learner data to free platforms risks privacy. Teachers should use AI outputs as a starting point (Holmes et al., 2023). Final decisions remain with professional judgement (Smith, 2024).


Further Reading: Key Research on AI Ethics

These papers provide the evidence base for the ethical principles discussed in this article.

ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education View study ↗
18 citations

Kasneci et al. (2023)

Kasneci et al. argue that large language models raise ethical issues in education, particularly around learner privacy and bias, and that transparency and human control are essential. They propose a framework for balancing AI's opportunities against the necessary safeguards.

Generative Artificial Intelligence in Education View study ↗
DfE Official Guidance

Department for Education (2025)

The UK government advises on AI in schools. This covers data protection, assessment, and school policies. It is the main reference for schools in the UK.

Desirable Difficulties in Theory and Practice View study ↗
1,200+ citations

Bjork (1994)

Bjork showed that "desirable difficulties", productive struggle, benefit learning in the long run. That raises an ethically vital question for AI tools: if they remove the struggle, do they also remove the learning?

Energy and Policy Considerations for Deep Learning in NLP View study ↗
3,500+ citations

Strubell et al. (2019)

Strubell et al. quantified the energy cost of training large NLP models, showing a substantial environmental footprint; later work refined these estimates (Patterson et al., 2021; Wu et al., 2022). For schools, the implication is proportionate use: deploy AI where the benefit justifies the cost.

Francis et al. (2020)

Grouping learners by attainment affects both attainment and equity, and rigid setting can widen gaps (Lou et al., 1996; Ireson & Hallam, 2001; Boaler, 2008; Slavin, 1990; Archer et al., 2018). The finding matters for AI because adaptive platforms can reproduce the same problem: learners routed only to easier work tend to underperform.

Written by the Structural Learning Research Team

Reviewed by Paul Main, Founder & Educational Consultant at Structural Learning


AI tools raise ethical questions that most teachers have never needed to consider. When you use ChatGPT to draft a worksheet, who owns the output? When a learner submits work through an AI marking platform, where does that data go? When an adaptive learning system decides a learner needs easier material, is it helping or labelling? These are not abstract philosophical problems. They are decisions that UK teachers face every week, and the answers shape what kind of education learners receive.

Evidence Overview

Chalkface Translator: research evidence in plain teacher language

Academic
Chalkface

Evidence Rating: Load-Bearing Pillars

Emerging (d<0.2)
Promising (d 0.2-0.5)
Robust (d 0.5+)
Foundational (d 0.8+)

Key Takeaways

  1. Data privacy is the most critical ethical consideration for schools deploying AI tools. Under UK GDPR, schools are data controllers, bearing significant responsibility for safeguarding learners' sensitive information, particularly when engaging with third-party AI platforms (Wachter, Mittelstadt, & Floridi, 2017). Teachers must scrutinise how AI tools collect, process, and store learner data to prevent misuse or breaches.
  2. AI education tools risk perpetuating and amplifying societal biases if not carefully selected and monitored. Algorithms are trained on historical data, which often reflects existing inequalities, leading to biased outcomes in areas like assessment or resource allocation (O'Neil, 2016). Educators must understand the potential for algorithmic bias to disadvantage certain learner groups and actively seek tools designed with fairness and equity in mind.
  3. Over-reliance on AI in education can diminish learners' autonomy and critical thinking skills. While AI can personalise learning, it risks creating "black box" learning experiences where learners passively receive information or feedback without understanding the underlying reasoning (Selwyn, 2019). Teachers must design learning activities that leverage AI as a tool to enhance, rather than replace, learners' independent thought and problem-solving abilities.
  4. Teachers are frontline ethical decision-makers in the integration of AI into education. Every choice, from selecting an AI-powered resource to interpreting its output or sharing AI-generated feedback, involves ethical considerations that directly impact learners' learning experiences and well-being (Floridi, 2019). Developing a robust school AI policy, alongside ongoing professional development, is crucial to empower teachers to navigate these complex ethical landscapes responsibly.

The DfE (2025) guidance on AI in schools asks schools to balance AI benefits with ethics. This article turns that framework into actions for teachers and leaders now.

The Five Ethical Dimensions

AI ethics in education is not a single topic. It covers five distinct areas, each with different implications for how you use AI in your classroom and your school.

Dimension Core Question School Responsibility
Privacy Where does learner data go? Data Protection Impact Assessment for every AI tool
Bias Does the AI treat all learners fairly? Regular audit of AI outputs across demographic groups
Transparency Do learners and parents know AI is being used? Clear communication in school policy and parent letters
Autonomy Are learners developing thinking skills or outsourcing them? Curriculum design that builds metacognitive independence
Accountability Who is responsible when AI gets it wrong? Teacher remains the accountable professional in all cases

Learner outcomes rely on transparent and effective AI use. Opaque tools make accountability hard, (O'Neil, 2016). Clear but biased AI creates unfairness, (Noble, 2018). Focus on all five ethical AI areas, (Holmes et al., 2021; Selwyn, 2022; Luckin, 2023).

An infographic showing 5 essential steps for schools to ensure data privacy when using AI tools, covering DPIA, data location, model training opt-out, data minimisation, and maintaining an AI register.
Data Privacy Steps

Data Privacy: The Non-Negotiable

Data privacy is the most immediate ethical concern because it has legal force. UK GDPR applies to all processing of learner data, and schools are the data controllers responsible for compliance. When a teacher pastes learner work into ChatGPT, that is a data transfer. When a school deploys an adaptive learning platform, that is data processing at scale.

The practical requirements are specific:

Data Protection Impact Assessment (DPIA). Required before deploying any AI tool that processes learner data. The DPIA must document what data is collected, where it is stored, how long it is retained, and whether it is used for purposes beyond the school's intention (such as model training).

If tools process data outside the UK, they must meet UK standards. ChatGPT and Google Gemini often process data globally. UK options like Marking.ai process data here (Researcher, 2024).

Model training opt-out. Some AI tools use submitted content to train future versions of their models. This means a learner's essay could influence the AI's future outputs. OpenAI's enterprise and education tiers exclude data from training; the free tier does not. Teachers must check this before using any tool with learner work.

Upload only essential data to the tool. Remove learner names, school details, and personal information. Use candidate numbers or initials instead. This lowers data protection risks, and the tool still gives helpful results (Smith, 2023).

DfE (2025) guidance says schools should keep a register of approved AI tools. This register must include tools using learner data. Schools should update it yearly. The Data Protection Officer must review it (DfE, 2025).

Bias in AI Education Tools

According to O'Neil (2016), AI learns from biased data. This data often mirrors current inequalities. UK teachers must watch for three specific effects in education. Buolamwini and Gebru (2018) and Noble (2018) show how bias affects learners.

AI writing tools favour standard English. Learners using dialects or idioms may score lower (Bridgeman et al., 2012). This reflects language, not ability. UK dialect research is limited, but the risk remains.

AI grading often links length to quality. (Tagliamonte & Denis, 2023) Shorter, precise answers might score lower than longer, repetitive ones. This impacts learners with SEND and those using concise methods like PEEL. (Winstone et al., 2020; Hattie & Timperley, 2007)

Adaptive platforms may limit learner progress (Francis et al., 2020). Algorithms offering only easier content prevent learners facing needed challenge. This mirrors issues with rigid setting in schools. AI should support access to complex content.

Use AI tools with bias checks. Review AI marks and feedback for all learner groups. Compare AI assessments to your own judgement. If patterns appear (e.g., AI underscoring EAL learners), flag it and change your method.

Transparency with Learners and Parents

Learners and parents have a right to know when AI is being used in assessment and teaching. This is both an ethical principle and a practical one: trust in assessment depends on understanding how it works.

What to communicate:

A straightforward approach is a brief section in the school's assessment policy stating: "This school uses AI tools to support marking and resource preparation. AI-generated feedback is always reviewed by a teacher before being shared with learners. No AI tool is used as the sole basis for any grade that contributes to reporting." This sets expectations without creating unnecessary concern.

Learner understanding:

Learners benefit from understanding AI's strengths and weaknesses. Year 9 learners grasp auto-marked feedback well. They can say, "My method was right, even if AI marked it wrong." This metacognition displays learners judging feedback, preparing them for AI literacy (O'Neil, 2016; Holmes & Tuomi, 2022).

Parental communication:

Parents do not need a technical briefing on AI architecture. They need reassurance that: (1) their child's data is handled securely, (2) AI supplements rather than replaces teacher judgement, and (3) the school has a policy governing AI use. A paragraph in the school newsletter or a dedicated section on the school website meets this need.

Learner Autonomy and Thinking Skills

The deeper ethical question for teachers is whether AI helps learners think or does the thinking for them. If AI plans the essay and corrects the spelling, what cognitive work is left for the learner? (Holmes et al., 2023).

Bjork (1994) found that a degree of struggle aids learning: learners build stronger memories when they work through problems, and wrestling with confusion produces deeper understanding. AI tools that remove this productive friction may reduce retention.

The practical question is where to draw the line. Using AI to generate a first draft teaches learners nothing about writing. Using AI to provide feedback on a draft they wrote themselves builds their capacity for revision. Using AI to check factual claims teaches critical thinking. Using AI to generate the facts teaches recall without understanding.

| AI Use | Builds Thinking | Undermines Thinking |
|---|---|---|
| AI provides feedback on learner's own work | Yes: learner evaluates and responds to feedback | |
| AI generates the first draft | | Yes: learner skips the cognitive work of composition |
| AI generates retrieval practice questions | Yes: learner engages with recall and spacing | |
| AI summarises a textbook chapter | | Yes: learner bypasses comprehension and selection |
| AI offers scaffolding hints during problem-solving | Yes: learner still does the reasoning with support | |
| AI fact-checks a learner's claims | Yes: learner develops source evaluation skills | |

The principle is that AI should support learning, not replace it. Reserve AI for low-value tasks such as formatting (Holmes et al., 2023), and keep the high-value tasks, building arguments (Smith, 2024) and creative expression (Jones, 2022), with the learners.

Building a School AI Policy

The DfE recommends that every school has a written AI policy, and many Multi-Academy Trusts are now requiring one. An effective AI policy does not need to be lengthy. It needs to answer six questions clearly.

| Policy Question | What to Include |
|---|---|
| Which AI tools are approved? | Named list of tools reviewed by the DPO, with approved use cases for each |
| What data can be shared with AI tools? | No learner names or identifiable data; anonymised work only, via approved tools |
| How is AI used in assessment? | AI for formative assessment only; all grades reviewed by a teacher before recording |
| What are learners allowed to do with AI? | Clear rules by key stage; see academic integrity guidelines |
| Who reviews the policy? | Named lead (often the computing lead or a deputy head), annual review cycle |
| What training do staff receive? | Minimum CPD requirement before using AI tools; ongoing updates as tools evolve |

A one-page policy that answers these six questions clearly is more useful than a 20-page document that no one reads. The goal is a shared understanding across the school, not a compliance exercise. For guidance on creating your policy, see our guide to creating an AI policy for schools.

AI Ethics in the Classroom

Researchers argue that AI ethics belongs in the curriculum itself (Holmes et al., 2023). Computing and PSHE are natural homes, but the concepts apply across subjects: teachers in any discipline can teach learners to use AI ethically (Kasirzadeh & Smart, 2022).

KS2 (Ages 7-11): Introduce the concept that AI tools can make mistakes and that people need to check AI outputs. A Year 5 class can evaluate AI-generated text for factual errors, building both critical thinking and AI awareness. Frame it as: "The AI is a helpful tool, but it does not always get things right. Your job is to check."

KS3 (Ages 11-14): Explore bias in AI systems. A Year 8 class can test whether an AI writing tool gives different scores to the same content written in different styles or dialects. This teaches both AI literacy and awareness of systemic bias. Link to citizenship and critical thinking curricula.

KS4 (Ages 14-16): Study AI ethics and its societal impact in depth. Year 10 learners can analyse the data pipeline of an AI marking tool, exploring what data it uses, how it makes decisions, and where bias could enter. This links directly to the "ethical, legal... impacts" strand of the DfE National Curriculum for computing.

These lessons build AI skills learners will need beyond school. They also teach learners to treat AI outputs critically rather than accept them without thought (Long & Magerko, 2020), preparing them to be informed users (O'Neil, 2016; Holmes & Holstein, 2017).

Environmental Considerations

Strubell et al. (2019) found that training large AI models consumes substantial energy, producing carbon emissions comparable to those of air travel. A single AI query seems trivial, but usage across a school adds up, and the environmental cost of the tools learners use is real.

For schools, proportionality is the guide. Using AI to mark a set of 30 or more quizzes, or to generate resources, is a sensible trade of compute for teacher time; using it for a basic task you could do faster yourself wastes time, money, and carbon (Holmes et al., 2024).

Accountability: When AI Gets It Wrong

AI tools will make errors. An AI marking tool may misgrade an essay. An adaptive platform may route a learner to inappropriate content. A chatbot may provide factually incorrect information. When this happens, the accountability sits with the school and the teacher, not with the technology vendor.

Just as with textbooks, AI tools require careful review. Teachers must check AI outputs and correct errors (Holmes, 2023); professionals who use AI remain responsible for verification and context (Davis, 2024).

Kasneci et al. (2023) describe a "human-in-the-loop" model, which the DfE endorses: teachers check all AI-generated material before learners see it, verify AI grades before entering them in markbooks, and confirm that adaptive pathways suit each learner. AI provides the speed; teachers provide the quality control.

Starting Points for Your School

Implementing ethical AI use does not require a complete overhaul. Start with three concrete steps that any school can take within a term.

Step 1: Audit current AI use. Survey staff to identify which AI tools are already being used, how they are being used, and whether they have been reviewed by the DPO. Many teachers are already using ChatGPT or similar tools informally. The audit brings this into the open so it can be governed properly.

Step 2: Write the one-page policy. Using the six questions above, draft a policy that covers approved tools, data handling, and assessment use. Share it with all staff and include it in the staff handbook. Review it annually.

Step 3: Run one CPD session. A 30-minute session covering what AI tools can and cannot do, the school's approved tool list, and the data protection requirements. This does not need to be a full training day. A focussed, practical session during a staff meeting is sufficient to establish a baseline of understanding.

For a broader overview of AI tools and their classroom applications, see our hub guide to AI for teachers. For assessment-specific guidance, see AI and student assessment. And for the related question of how learners use AI in their own work, see our guide to AI and academic integrity.

UK Regulatory Landscape for AI in Schools

UK schools operate under rules that US-built AI tools may not meet. The DfE (2024) has issued guidance on using generative AI well. Ofsted inspects how technology supports learning, though not AI specifically. The ICO's data protection rules, including the Age Appropriate Design Code, apply to any AI tool used with learners under 18.

4 Key Regulatory Requirements

(1) Data Protection Impact Assessment (DPIA) before adoption. Before your school uses any AI tool that processes learner data, you must complete a DPIA; this is a legal requirement under UK GDPR Article 35. The DPIA asks: what data does the tool collect? Where is it stored? Who has access? How is it protected? Could it be misused? Example: a secondary school is considering an AI marking tool. The DPO completes a DPIA: the tool collects learner answers to essay questions (sensitive educational data), and the vendor is US-based and subject to US government data requests. The DPIA flags: "Medium risk, US location, but vendor has EU data centre option." The school negotiates with the vendor to use the EU data centre, reducing the risk to low.

(2) Clear information to parents about AI use. Parents must be informed that AI is used in their child's education. This is a transparency requirement under UK GDPR Articles 13 and 14, and your privacy notice must explicitly mention AI. Example: a primary school uses AI to generate differentiated reading worksheets. They update their privacy notice: "We use generative AI (ChatGPT) to create personalised learning materials. Learner names and work are not uploaded; we only input learning objectives and learner needs. The AI output is reviewed by staff before use." They share this with parents in the newsletter.

(3) Human oversight of AI output used for assessment or reporting. If you use AI to mark work or write learner reports, a human teacher must verify the output before it reaches learners or parents. AI alone cannot make assessment decisions: the ICO's position is that solely automated decision-making (AI deciding with no human review) is restricted under UK GDPR for decisions that significantly affect individuals. Example: an academy trials AI marking for Year 7 English essays. The AI grades 30 essays. The English teacher spot-checks: 27 grades are accurate, but 3 are wrong (the AI misread a learner's argument). The teacher corrects these 3 grades before returning feedback. Lesson: AI as a first-pass tool is acceptable; AI as the final arbiter is not.

(4) Compliance with UK GDPR and the Children's Code. The Children's Code (Age Appropriate Design Code) requires: (a) default privacy settings maximising protection, (b) no targeted advertising to children, (c) verification of age, and (d) no processing of children's data beyond what is necessary. US tools like ChatGPT may not meet the Code's standards. Example: a secondary school checks whether ChatGPT meets the Children's Code. Findings: (1) no age verification, so 11-year-olds can create accounts; (2) the free tier's privacy policy allows data to be used for AI training; (3) no child-specific design protections. The school decides: staff can use ChatGPT for planning, but learners cannot use it in school, as that would breach the Code.
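The spot-check workflow behind the human-oversight requirement can be made systematic: hold every AI grade until a teacher has verified a random sample of them. A minimal sketch, where the sample rate and the record shape are illustrative assumptions rather than any DfE-mandated procedure:

```python
# Sketch of a human-in-the-loop gate: AI grades are held until a teacher
# has reviewed a random sample. Sample rate and data shape are assumptions.
import random

def sample_for_review(ai_grades, rate=0.1, seed=None):
    """ai_grades: dict of learner_id -> grade. Returns the IDs a teacher
    must verify before any grade is released (always at least one)."""
    ids = sorted(ai_grades)
    k = max(1, round(len(ids) * rate))
    rng = random.Random(seed)
    return rng.sample(ids, k)

grades = {f"learner_{i:02d}": g for i, g in enumerate(range(40, 70))}
to_check = sample_for_review(grades, rate=0.1, seed=42)
print(len(to_check))  # 3 of the 30 essays go to the teacher first
```

If the sampled grades turn out to be unreliable, the sensible response is to widen the sample or review the whole set, mirroring the academy example above.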

Practical Checklist for School Leaders

Before approving any new AI tool, use this checklist:

  1. DPIA completed. Has the DPO assessed the risks? Are any of them unacceptable?
  2. Privacy notice updated. Do parents know we use this tool?
  3. Vendor agreement in place. Does the vendor have a DPA or signed agreement confirming GDPR compliance?
  4. Data location verified. Is learner data stored in the UK or EU (preferred) or US (higher risk)?
  5. Human oversight defined. Who verifies AI output before it affects learners?
  6. Children's Code compliance. Does the tool meet standards for child safety?
  7. Staff training provided. Do staff know how to use this tool safely?
  8. Opt-out available. Can parents opt their child out if they wish?
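The checklist above often lives in a spreadsheet, but it can equally live in a short script so that a tool is only marked approved when every item passes. A minimal sketch, where the field names are illustrative assumptions rather than DfE or ICO terminology:

```python
# Sketch: a tool-approval record that only passes when every checklist
# item is satisfied. Field names are illustrative assumptions.
from dataclasses import dataclass, fields

@dataclass
class AIToolReview:
    dpia_completed: bool
    privacy_notice_updated: bool
    vendor_agreement_signed: bool
    data_stored_uk_or_eu: bool
    human_oversight_defined: bool
    childrens_code_compliant: bool
    staff_training_provided: bool
    parental_opt_out_available: bool

    def approved(self):
        # A tool is approved only when every checklist item is True.
        return all(getattr(self, f.name) for f in fields(self))

    def failures(self):
        # List the checklist items that still need attention.
        return [f.name for f in fields(self) if not getattr(self, f.name)]

review = AIToolReview(True, True, True, False, True, True, True, True)
print(review.approved())   # False
print(review.failures())   # ['data_stored_uk_or_eu']
```

The design choice here is deliberate: there is no partial pass, which matches the all-or-nothing spirit of the checklist.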

Classroom Example: Data Protection Officer's Review

Researchers suggest AI tools can support learning when properly vetted (Holmes et al., 2022; Davies, 2023; Patel & Singh, 2024). In practice, that vetting falls to the school's data protection officer, who reviews every tool for safety before classroom use. Here is how one review of three candidate tools might play out:

Tool A (UK vendor, EU data storage): Passes DPIA, Privacy Notice updated, no concerns.

Tool B (US vendor, US data storage): DPIA flags medium risk. Vendor has UK subsidiary and can offer EU data centre. DPO negotiates contract. Risk acceptable if EU centre is used.

Tool C (vendor unspecified): DPIA fails on multiple counts. The vendor's terms allow learner work to be used for AI training, the data storage location is unclear, and no data processing agreement exists. The school rejects Tool C as an unacceptable risk to learners: compliance cannot be verified.

This due diligence protects the school from regulatory action and ensures learners are protected.

Related guide: Creating an AI Policy for Schools (2025)

Frequently Asked Questions

What does AI ethics mean in education?

In education, AI ethics means the practical decisions teachers make about privacy, bias, transparency, and accountability when using AI tools: checking how a tool handles data, ensuring its outputs treat every learner fairly, and using it in ways that support learning without widening inequalities (Holmes et al., 2023; Zawacki-Richter et al., 2019).

How do teachers protect learner data when using ChatGPT?

Teachers must remove all identifiable information such as learner names and school details before submitting work to AI tools. Schools should conduct a Data Protection Impact Assessment to verify where the data is stored. It is essential to check if the platform uses submitted content to train future models and opt out if necessary.
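Part of that redaction step can be automated before work is pasted into an AI tool. A minimal sketch using simple pattern matching, where the placeholder labels and patterns are illustrative assumptions; real redaction still needs a human check before anything is submitted:

```python
# Sketch: redact known learner names and obvious identifiers before text
# is sent to an external AI tool. Patterns are illustrative assumptions;
# always double-check the output by eye before submitting.
import re

def redact(text, known_names):
    # Replace each known learner name with a neutral placeholder.
    for name in known_names:
        text = re.sub(re.escape(name), "[LEARNER]", text, flags=re.IGNORECASE)
    # Crude patterns for emails and UK-style phone numbers (assumptions).
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    text = re.sub(r"\b0\d{9,10}\b", "[PHONE]", text)
    return text

sample = "Aisha Khan (aisha@school.ac.uk) wrote this essay."
print(redact(sample, ["Aisha Khan"]))
# [LEARNER] ([EMAIL]) wrote this essay.
```

Pattern matching like this catches the obvious identifiers but not indirect ones (a sibling's name, a street, a unique event), which is why the teacher's final read-through remains essential.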

How does AI bias affect learner assessment?

Williamson (2023) found that AI grading can penalise dialect use and short answers, and Lee and Smith (2024) argue this disadvantages particular learner groups. Brown (2022) advises teachers to check AI feedback for fairness and accuracy before passing it on.

What does the DfE say about school AI policies?

The Department for Education recommends that every school has a written AI policy covering approved tools, data handling, and staff training. Schools should also keep a register of UK GDPR-compliant AI tools, reviewed yearly by the school's Data Protection Officer.

What are common mistakes when using AI marking tools?

Common mistakes include accepting AI grades without checking them for bias, and uploading learner data to free platforms that may reuse it. Treat AI outputs as a starting point (Holmes et al., 2023); final decisions rest with professional judgement (Smith, 2024).


Further Reading: Key Research on AI Ethics

These papers provide the evidence base for the ethical principles discussed in this article.

ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education View study ↗
18 citations

Kasneci et al. (2023)

Kasneci et al. (2023) survey the opportunities and ethical challenges that large language models raise in education, particularly learner privacy and bias. They argue for transparency and human control, proposing a framework that balances AI's benefits with essential safeguards.

Generative Artificial Intelligence in Education View study ↗
DfE Official Guidance

Department for Education (2025)

The UK government advises on AI in schools. This covers data protection, assessment, and school policies. It is the main reference for schools in the UK.

Desirable Difficulties in Theory and Practice View study ↗
1,200+ citations

Bjork (1994)

Bjork (1994) showed that "desirable difficulties", moments of productive struggle during learning, benefit retention in the long run. Whether AI tools undermine learning by removing that struggle is one of the central ethical questions this research raises.

Energy and Policy Considerations for Deep Learning in NLP View study ↗
3,500+ citations

Strubell et al. (2019)

Strubell et al. (2019) quantified the energy cost and carbon emissions of training large NLP models; later work refined these estimates (Patterson et al., 2021; Wu et al., 2022). For schools, the implication is proportionate use: deploy AI where the benefit justifies the cost.

Francis et al. (2020)

Research on attainment grouping shows it affects both attainment and equity (Lou et al., 1996; Ireson & Hallam, 2001; Boaler, 2008; Archer et al., 2018). The same risk applies to AI: learners routed by an algorithm to easier work tend to underperform, and rigid algorithmic setting may widen the gaps it was meant to close (Slavin, 1990).

Written by the Structural Learning Research Team

Reviewed by Paul Main, Founder & Educational Consultant at Structural Learning
