AI and Academic Integrity: A Teacher's Guide

Updated on February 19, 2026

A practical guide to maintaining academic integrity in an age of AI. Covers why detection tools fail, exam board positions, AI-resistant assessment design, defining acceptable AI use, and building school policy.

A Year 11 pupil submits a history essay that reads fluently, covers all the assessment criteria, and contains no spelling errors. It also contains no personality, no misunderstandings, and no evidence of the specific struggle that pupil had with source analysis last week. The teacher suspects AI involvement but cannot prove it. This scenario plays out in thousands of UK classrooms every week, and the response to it will shape how we assess learning for the next decade.

Key Takeaways

  1. AI detection tools are unreliable: No current tool can accurately distinguish AI-generated text from human writing. False positive rates of 10-20% make them unsuitable as sole evidence of misconduct (Weber-Wulff et al., 2023).
  2. Assessment redesign is more effective than detection: Tasks that require personal reflection, specific classroom experiences, or process documentation are inherently resistant to AI completion.
  3. Clear policy prevents conflict: Schools that define acceptable and unacceptable AI use before incidents occur handle cases more fairly than those responding reactively.
  4. The goal is learning, not catching: The best response to AI in academic work is designing assessment that requires pupils to demonstrate understanding rather than produce polished text.

The Department for Education's 2025 guidance addresses academic integrity directly, noting that "schools should adapt their assessment practices to ensure they continue to assess genuine pupil understanding in an era of widely available AI tools" (DfE, 2025). This article provides the practical framework for doing so, from detection through to assessment redesign.

Why AI Detection Does Not Work

AI detection tools (GPTZero, Turnitin AI, Originality.ai) attempt to identify text generated by large language models. The research consensus is that they are not reliable enough to use as evidence in academic misconduct proceedings.

Weber-Wulff et al. (2023) conducted the largest independent evaluation of AI detection tools, testing 14 tools across 126 documents. Key findings:

False positive rates are unacceptable. Detection tools incorrectly flagged human-written text as AI-generated in 10-20% of cases. In a school of 1,000 pupils, a single round of screening could wrongly flag 100-200 honest submissions. For pupils with English as an additional language (EAL), false positive rates were even higher, because their writing patterns more closely resemble AI output.

Simple edits defeat detection. Replacing a few words, adjusting sentence structure, or adding deliberate errors reduces detection accuracy to near zero. A pupil who pastes AI text and makes minor changes is effectively undetectable by current tools.

Detection tools themselves use AI. The irony of using an AI tool to detect AI-generated text creates a recursive problem. The detection AI has its own biases and limitations, and its confidence scores are not probabilities in any statistically meaningful sense. A "95% AI-generated" label does not mean a 95% chance the text was AI-generated.
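
To make the base-rate problem concrete, the sketch below applies Bayes' theorem to illustrative figures. Only the 10-20% false positive range comes from Weber-Wulff et al. (2023); the prevalence and sensitivity values are assumptions chosen for the example, not measured results.

```python
# Illustrative only: why a detector's "confidence" is not the probability of misconduct.
# The prevalence and sensitivity figures are assumptions for this example;
# only the 10-20% false positive range comes from Weber-Wulff et al. (2023).

def prob_actually_ai(prevalence, sensitivity, false_positive_rate):
    """Bayes' theorem: P(text is AI-written | detector flags it)."""
    flagged_ai = sensitivity * prevalence                   # true positives
    flagged_human = false_positive_rate * (1 - prevalence)  # false positives
    return flagged_ai / (flagged_ai + flagged_human)

prevalence = 0.10         # assume 10% of submissions are substantially AI-written
sensitivity = 0.80        # assume the tool catches 80% of genuine AI text
for fpr in (0.10, 0.20):  # false positive range reported by Weber-Wulff et al.
    p = prob_actually_ai(prevalence, sensitivity, fpr)
    honest_flagged = fpr * (1 - prevalence) * 1000          # in a school of 1,000 pupils
    print(f"FPR {fpr:.0%}: P(really AI | flagged) = {p:.0%}, "
          f"~{honest_flagged:.0f} honest pupils flagged per 1,000")
```

Under these assumptions, fewer than half of flagged submissions would actually be AI-written. The exact figures change with the assumptions, but the underlying point stands: a flag on its own is weak evidence.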

The practical conclusion: do not use AI detection tools as the sole basis for accusing a pupil of misconduct. They can be one data point among many (combined with knowledge of the pupil's previous work, in-class performance, and the submission process), but they should never be the deciding factor.

What Exam Boards Say

UK exam boards have issued guidance on AI use in coursework and non-examined assessment (NEA). The positions vary but share common ground.

| Exam Board | Position on AI in NEA/Coursework | Detection Approach |
| --- | --- | --- |
| AQA | AI-generated content submitted as the pupil's own is malpractice. AI may be used for research if acknowledged. | Teacher authentication + process evidence |
| Edexcel (Pearson) | Submitting AI output as own work is malpractice. Schools must have policies on AI use. | Supervised elements + draft review |
| OCR | AI assistance must be declared. Unacknowledged use treated as malpractice. | Declaration forms + teacher verification |
| WJEC/Eduqas | AI use beyond research and planning constitutes malpractice. | Teacher authentication of candidate work |

The common thread: all boards rely primarily on teacher authentication rather than detection software. Teachers are expected to know their pupils' work well enough to identify submissions that do not match the pupil's demonstrated ability. This places a significant responsibility on teachers, but it is more reliable than algorithmic detection.

For internal school assessment (not exam board NEA), the school sets its own policy. The key principle from exam boards applies equally: authenticate through knowledge of the pupil, not through software.

Designing AI-Resistant Assessment

The most effective response to AI academic integrity concerns is not detection but prevention. Assessment tasks that require specific, personal, process-based evidence are inherently difficult for AI to complete on a pupil's behalf.

| Strategy | How It Works | Example |
| --- | --- | --- |
| Process portfolios | Pupils submit drafts, notes and reflections alongside final work | Year 10 English: submit planning notes, first draft with teacher feedback, and final version |
| In-class components | Part of the assessment is completed under supervised conditions | Year 9 history: research at home, write under exam conditions |
| Personal reflection | Questions require reference to the pupil's own experience or classroom activity | "Explain how the experiment we conducted in Tuesday's lesson supported or contradicted your hypothesis" |
| Oral defence | Pupils verbally explain their work and answer follow-up questions | Year 11 science: 5-minute viva on their investigation, recorded for moderation |
| Iterative feedback | Teacher marks drafts and the pupil must respond to specific feedback points | "In your next draft, address the weakness I identified in paragraph 3" |
| Specific source constraints | Pupils must use only named sources provided by the teacher | "Using only Sources A, B and C from our lesson pack, evaluate..." |

None of these strategies eliminates the possibility of AI misuse entirely. A determined pupil can still use AI and then fabricate process evidence. The goal is not a foolproof system but an assessment approach in which genuine engagement is the easiest route to a good grade, and AI misuse requires more effort than doing the work properly.

Defining Acceptable AI Use

Schools need a clear spectrum of AI use, from fully acceptable to clearly unacceptable. Without this, every incident becomes an argument about interpretation. The following framework, adapted from several UK schools that have already implemented AI policies, provides a workable starting point.

| Category | Examples | Policy Position |
| --- | --- | --- |
| Acceptable | Spell-checking, grammar correction, thesaurus-style word suggestions | Permitted without declaration |
| Permitted with declaration | AI for initial research, brainstorming ideas, explaining a concept the pupil does not understand | Must be acknowledged; the pupil writes their own content |
| Not permitted | AI generates text that the pupil submits as their own; AI produces answers to assessment questions; AI rewrites the pupil's draft entirely | Academic misconduct, treated under the existing malpractice policy |

The middle category (permitted with declaration) is where most of the complexity lies. A pupil who asks ChatGPT to explain photosynthesis so they can understand it better is using AI as a learning tool. A pupil who asks ChatGPT to write their photosynthesis essay is submitting someone else's work. The difference is whether the pupil engaged in the cognitive work of transforming understanding into their own written argument.

Communicate this framework to pupils explicitly, with examples relevant to their year group and subject. A poster on the classroom wall, a slide at the start of an assessment, and a line on the assessment cover sheet all reinforce the expectation.

Conversations, Not Confrontations

When a teacher suspects AI-generated work, the initial response should be a conversation, not a formal accusation. The purpose is to establish what the pupil understands and how they produced their work.

Effective questions:

"Can you talk me through how you structured this essay?" A pupil who wrote the work themselves can describe their thinking process. A pupil who submitted AI output often cannot explain why they made specific choices.

"What was the hardest part of this piece?" Genuine engagement produces genuine struggle. A pupil who reports no difficulty with a complex task is either exceptionally able or did not do the cognitive work.

"I noticed this paragraph uses some sophisticated language. Can you explain what this sentence means in your own words?" A pupil who understands their work can paraphrase it. A pupil who submitted AI output often cannot.

"If I asked you to write the next paragraph right now, on a related topic, could you do that in a similar style?" This tests whether the pupil can produce work of comparable quality under observed conditions.

These conversations serve two purposes: they gather evidence about whether the work is genuinely the pupil's own, and they provide a formative assessment opportunity. Even if the pupil did use AI, the conversation teaches them that understanding matters more than output quality. This builds metacognitive awareness of what genuine learning looks like.

Subject-Specific Challenges

AI academic integrity manifests differently across subjects. The risks and responses need to be tailored.

English: The highest-risk subject because AI excels at producing fluent prose. The most effective countermeasure is process-based assessment: requiring planning notes, annotated drafts and personal reflection alongside the final piece. Ask pupils to write about texts studied in class, referencing specific discussions or activities that an AI could not know about.

Mathematics: Lower risk for homework because AI often generates incorrect working-out for complex problems. Higher risk for statistics coursework where AI can produce plausible analysis. Require pupils to explain their reasoning verbally and ask follow-up questions about alternative methods.

Science: AI can produce technically accurate explanations but struggles with experimental design that references specific apparatus and methods used in the school's laboratory. Frame practical write-ups around "the method we used in class" rather than generic experimental procedures. Require hand-drawn diagrams or photographs of actual experimental setups.

Humanities: AI produces competent but generic historical and geographical analysis. Counter this by requiring reference to specific sources provided in class, asking for personal evaluation ("Do you agree with Source B's interpretation? Explain why"), and building in class-based writing components where the teacher observes the process.

Computing: Paradoxically, the subject most concerned with AI is also the most vulnerable to it. AI can generate functional code with high accuracy. The most effective approach is to require pupils to explain their code line-by-line, modify it on the spot in response to teacher questions, and document their debugging process.
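
As a concrete illustration (the program, grade boundaries and follow-up questions below are invented for this article, not taken from any exam specification), a short code viva might put the pupil's own submission back in front of them and ask them to explain and modify it live:

```python
# A hypothetical viva prompt: a short, working program the pupil claims to have written.
# The teacher asks the questions in the comments and watches the pupil change the code live.

def letter_grade(score):
    """Convert a percentage score into a grade band."""
    if score >= 70:
        return "Distinction"
    elif score >= 55:
        return "Merit"
    elif score >= 40:
        return "Pass"
    return "Fail"

scores = [82, 61, 39, 55]
print([letter_grade(s) for s in scores])  # ['Distinction', 'Merit', 'Fail', 'Merit']

# Possible follow-up prompts:
# 1. "Why does the order of the if/elif checks matter? What happens if we test 40 first?"
# 2. "Change the function so that a score of exactly 70 counts as a Merit, then re-run it."
# 3. "Add a check that rejects scores below 0 or above 100, and explain your choice."
```

A pupil who wrote and understood the code can answer these questions quickly; a pupil who pasted AI-generated code typically cannot, and the conversation itself becomes the assessment evidence.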

Teaching Pupils to Use AI Well

Rather than treating AI as a threat to be policed, the more productive approach is teaching pupils how to use AI as a legitimate learning tool. This builds the AI literacy skills they will need in further education and employment while maintaining academic standards.

Critical evaluation: Ask pupils to generate an AI response to a question, then evaluate it for accuracy, bias and completeness. This teaches critical thinking about AI outputs and makes clear that AI-generated text is a starting point, not a finished product.

Prompt refinement: Teach pupils that the quality of AI output depends on the quality of the input. A vague prompt produces vague output. A specific, well-structured prompt produces something more useful. This is a thinking skill in itself: articulating what you want requires clarity about what you know and what you need.

Citation and attribution: Establish the expectation that AI assistance is declared, just as pupils cite books and websites. A simple statement ("I used ChatGPT to help me understand the concept of supply and demand, then wrote this explanation in my own words") normalises honest use and discourages concealment.

Comparison exercises: Ask pupils to write a paragraph themselves, then generate an AI paragraph on the same topic, then compare the two. This develops awareness of their own writing voice and the generic quality of AI output. Many pupils discover that their own writing, while less polished, is more interesting and more personal than the AI version.

Building Your School Policy

An academic integrity policy for AI does not need to be a new document. It extends your existing malpractice and plagiarism policy with AI-specific guidance. Include these elements:

1. Definition of AI misuse: "Submitting AI-generated text, code, calculations or other content as your own work without acknowledgement constitutes academic misconduct equivalent to plagiarism."

2. Acceptable use spectrum: The three-tier framework above (acceptable, permitted with declaration, not permitted), with subject-specific examples that pupils can understand.

3. Investigation process: Conversation first, then evidence gathering (comparison with in-class work, verbal questioning, process documentation), then formal outcome if warranted. Never rely solely on detection software.

4. Graduated consequences: First instance: educational conversation and resubmission. Repeated instances: formal academic misconduct procedures. The emphasis should be on learning, not punishment, particularly for younger pupils who may not fully understand the implications of AI use.

5. Staff guidance: How to design AI-resistant assessments, how to conduct integrity conversations, and when to escalate to senior leadership. Include in the staff CPD programme.

For the broader ethical framework within which academic integrity sits, see our guide to AI ethics in education. For assessment design that naturally reduces integrity risks, see AI and student assessment. And for the wider context of AI in teaching, see our hub article on AI for teachers.

Further Reading

These papers and guidance documents provide the evidence base for the recommendations in this article.

Weber-Wulff et al. (2023), Testing of Detection Tools for AI-Generated Text. The largest independent evaluation of AI text detection tools, testing 14 detectors across 126 documents. It found that no tool reliably distinguished AI-generated from human-written text, with false positive rates of 10-20%. Essential reading for any school considering detection-based approaches.

Department for Education (2025), Generative Artificial Intelligence in Education. The UK government's formal guidance on AI in schools, including specific sections on academic integrity, assessment design, and school policy requirements. It establishes that schools should adapt assessment practices rather than rely solely on detection.

Kasneci et al. (2023), ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. A comprehensive analysis of AI in education, including the academic integrity implications of large language models. It discusses the tension between AI as a learning tool and AI as a means of bypassing learning, with recommendations for policy and practice.

JISC (2024), Assessment Design in the Age of Artificial Intelligence. Practical guidance from the UK's digital, data and technology agency for education, focusing on how to redesign assessment for an AI-enabled world. It includes specific strategies for authentic assessment and process-based evaluation.

Hattie and Timperley (2007), The Power of Feedback. While primarily about feedback, this paper's framework for understanding how assessment drives learning is directly relevant to designing assessment that values the learning process over the final product. Its four feedback levels map onto different approaches to AI-resistant assessment design.
