The Teacher-Architect: Using Historical Logs to Preserve Human Agency in AI-Assisted Learning

Updated on April 3, 2026


Dr. Tony Richardson explains how the Historical Log and the Human-AI Dialectic Loop help teachers preserve student agency and audit thinking processes.

Understanding AI in Education

AI worries echo past calculator fears. In the 1970s, educators feared calculators would erode learners' maths skills (Akgun & Toker, 2024), worrying that offloading arithmetic would "de-skill" a generation.

[Figure: Human-AI Inquiry Loop — a five-step iterative cycle: Formulate Prompt → AI Generates Output → Evaluate & Log → Reflect & Redirect → Achieve FAK (Future Actionable Knowledge).]

Hembree and Dessart (1986) found that learners grasped concepts better when freed from manual computation, and Cuban (1986) suggested that technology reshapes curricula rather than harming them. Freed from mechanical sums, instruction can focus on understanding the underlying logic.

The same principle could apply to AI today. The key difference is that unlike calculators, which provided only static final answers, the Human-AI Dialectic Loop (Richardson & O'Neill, 2026) provides a visible "Audit Trail" of iterative thinking.
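The five-step loop described above can be pictured as a simple data structure and control flow. The sketch below is illustrative only: every function and field name is an assumption for demonstration, not part of Richardson and O'Neill's published model.

```python
# Illustrative sketch of the five-step Human-AI Inquiry Loop.
# All names here are assumptions for demonstration purposes.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LogEntry:
    """One cycle of the Historical Log: prompt, output, and the learner's critique."""
    cycle: int
    prompt: str
    ai_output: str
    evaluation: str
    timestamp: datetime = field(default_factory=datetime.now)

def inquiry_loop(ask_ai, evaluate, first_prompt="initial question", max_cycles=5):
    """Run the loop until the learner judges the output actionable (FAK)."""
    log = []
    prompt = first_prompt
    for cycle in range(1, max_cycles + 1):
        output = ask_ai(prompt)                  # step 2: AI generates output
        verdict, next_prompt = evaluate(output)  # step 3: evaluate & log
        log.append(LogEntry(cycle, prompt, output, verdict))
        if next_prompt is None:                  # step 5: FAK achieved
            break
        prompt = next_prompt                     # step 4: reflect & redirect
    return log
```

The point of the sketch is that the log, not the final output, is the artefact: each pass through the loop leaves a timestamped record of what was asked, what came back, and what the learner thought of it.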

Key Takeaways

  1. Generative AI, much like the introduction of electronic calculators, can significantly shift the focus of learning from rote tasks to higher-order conceptual problem-solving. By offloading computational or generative burdens, learners are freed to engage in more complex reasoning and critical thinking, aligning with principles of cognitive load theory where extraneous load is reduced to maximise germane load (Sweller, 1988). This shift allows educators to design curricula that prioritise deeper understanding and application over mere procedural execution.
  2. Establishing clear "audit trails" of thought is paramount for educators to discern authentic learner inquiry and engagement when using AI tools. By encouraging learners to document their process, choices, and reflections, teachers gain insight into their reasoning, much like the diagnostic power of effective formative assessment (Black & Wiliam, 1998). This transparency moves beyond simply assessing the final product, allowing for targeted feedback and the preservation of genuine learning pathways.
  3. Teachers are equipped to act as "teacher-architects," intentionally designing learning environments where AI tools enhance rather than diminish learner agency and critical thinking. This requires a proactive approach to curriculum development and pedagogical innovation, ensuring technology integration supports deeper learning outcomes and fosters learners' metacognitive skills (Fullan, 2001). By shaping how AI is used, educators can preserve the human element of inquiry and problem-solving.
  4. Concerns about AI "de-skilling" learners can be effectively addressed by shifting educational emphasis towards cultivating robust critical inquiry and evaluation skills. Instead of fearing the offloading of routine tasks, educators should focus on teaching learners how to critically assess AI outputs, formulate insightful questions, and engage in independent verification (Paul & Elder, 2007). This approach ensures learners develop the intellectual tools necessary to thrive in an AI-augmented world, moving beyond surface-level engagement.

[Figure: The Teacher-Architect Model — comparing a traditional AI response with the Teacher-Architect approach using the Human-AI Dialectic Loop.]


The Teacher-Architect Model

From Black Box to Audit Trail

Richardson (date) uses the Human-AI Dialectic Loop to help learners reclaim thinking skills after AI use. The method treats AI as a "Cognitive Adversary" that challenges the learner's logic rather than simply producing content.

The "Process-Turn" values the learning process, not just the finished writing: teachers examine how learners build understanding (Richardson, date needed). Knowledge is built through rigorous questioning of the algorithm's output, not simple acceptance of it (Richardson), so the AI needs clear, critical guidance.

For educators, this raises a critical question: How do we prevent students from passively offloading their thinking to AI, and instead use AI as a tool for deeper learning?

The Audit Trail of Thought

Researchers have found that learners stay more engaged with AI when they keep a Historical Log, and documenting an investigation carries benefits in its own right (Winne & Perry, 2000): it encourages active rather than passive thinking (Schwartz et al., 2004).

Cognitive offloading risks obscuring internal thought (Risko & Gilbert, 2016), so the log should externalise thinking rather than replace it. Recording a critique of each AI output forces learners to assess their own reasoning: the log makes them examine their thought patterns against the AI's suggestions, and logging prompts the reflective question of whether each suggestion is actually useful (Flavell, 1979).

Richardson et al. (2020) argue that the value of Future Actionable Knowledge (FAK) lies in how a learner justifies their information search path. Keeping a log actively exercises cognition, countering "passive offloading." Without such documentation, learners risk what Carr (2020) calls "cognitive atrophy": automated answers may weaken memory and the ability to synthesise information.

Research findings on documentation

Learners who document their inquiry remember more and think more critically (Ataş & Yildirim, 2024). Research in computer-supported learning shows this outperforms focusing on the final product alone.

This matters especially in the context of generative AI. The machine's "fluency" often creates a "fluency illusion", where the user believes they understand a topic simply because the AI has summarised it clearly and confidently (Bjork et al., 2013). The Audit Trail disrupts this illusion by requiring students to "show the work" of their logic, much like traditional formative assessment practices.

Mollick (2024) states humans must audit AI outputs. Learners benefit from prompting and refining AI logic. This "Human-in-the-Loop" approach maintains their agency.

Shifting pedagogical gravity

By adopting the Historical Log, the pedagogical "centre of gravity" shifts fundamentally from the final product to the documented evolution of thought. This allows the Teacher-Architect to see the scaffolding of the student's mind.

Dweck (2017) found learners show more motivation when assessment focuses on their process, not just the outcome. Learners with a growth mindset see challenges as part of learning, not signs of failure.

Decision records let learners demonstrate their learning, creating "forensic" evidence of thought. This prepares them for higher education (Fullan, 2023), where justifying one's reasoning is a mark of academic maturity.

Detecting Authentic Inquiry

One concern for educators is whether students might ask AI to retroactively simulate a Historical Log for a finished paper. A technical reality, however, provides some protection.

LLMs do not reproduce the irregular flow of real human learning (Marcus & Davis, 2019): an AI can generate a list of prompts, but the sequence is too smooth. True learning involves errors and conceptual struggle (Mitchell et al., 2023), and a suspiciously perfect log lacks the cognitive friction that genuine inquiry leaves behind.

McFarlane (2002) argues that effective human-computer interaction requires negotiated interruptions, not passive monitoring. When learners spot a flaw and interrupt the AI's process, this "Cognitive Pivot" (Richardson & O'Neill, 2026) turns them from passive recipients into critical thinkers.

The fingerprints of real learning

These "adversarial" interactions are the fingerprints of a human mind at work. Research indicates that AI models, when asked to simulate a dialogue, default to a "cooperative" tone that lacks the abrasive, critical scepticism a student displays when truly grappling with difficult concepts (Bender et al., 2021). Thus, the presence of "Intellectual Friction" (Richardson & O'Neill, 2026) within the log serves as a validated marker of human agency.

Teachers can apply forensic analysis to a log's metadata and temporal logic: entries spread across hours or days show thought evolving in a dialectic (Mollick, 2024), whereas synthetic logs lack this human "rhythm of work."
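As a toy illustration of such a temporal check, one could test whether a log's timestamps span real working time rather than a single pasted-in sitting. The one-hour threshold and the list format are arbitrary assumptions for demonstration, not a validated method.

```python
# Illustrative temporal check: a genuine Historical Log accumulates over
# hours or days, while a retroactively generated one is typically produced
# in one sitting. The one-hour threshold is an arbitrary assumption.
from datetime import datetime, timedelta

def worked_over_time(timestamps, min_span=timedelta(hours=1)):
    """True if the log's entries span at least `min_span` of real time."""
    return max(timestamps) - min(timestamps) >= min_span

genuine = [datetime(2026, 4, 1, 9, 0), datetime(2026, 4, 1, 14, 30),
           datetime(2026, 4, 2, 10, 0)]          # spread over two days
pasted = [datetime(2026, 4, 3, 22, 1), datetime(2026, 4, 3, 22, 2),
          datetime(2026, 4, 3, 22, 4)]           # three minutes, one sitting
```

In practice no single heuristic is decisive; the point is only that timing metadata carries evidence that a pasted transcript does not.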

AI text shows less drafting variation than human writing (Sadasivan et al., 2023; Liang et al., 2023), so algorithmic and human checks can retroactively flag AI content using signals such as "burstiness" (variation in sentence length and structure) and "perplexity" (how predictable the text is to a language model).
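Burstiness, for instance, can be crudely approximated as the variation in sentence length across a draft. A minimal sketch follows, with the caveat that real detectors are far more sophisticated and, as noted below, themselves unreliable.

```python
# Rough illustration of "burstiness": human drafts tend to mix long and
# short sentences, while AI text is often more uniform in length.
# This is a demonstration of the concept, not a reliable detector.
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths (stdev / mean)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)
```

A draft that mixes one-word interjections with long, qualified sentences scores higher than one whose sentences are all the same length, which is the intuition behind the research cited above.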

Reclaiming Teacher Agency

Richardson (2024) argues that Historical Logs address two problems that weigh heavily on schools: teacher burnout and academic integrity. Learners benefit from the method as well.

Historical Logs help teachers assess learning processes, easing workloads. This approach reduces time spent on plagiarism checks. Focusing on formative feedback improves learning (Wiliam, 2018).

Teachers currently dedicate excessive hours to "AI detectors" that are notoriously unreliable, frequently producing false positives (Weber-Wulff et al., 2023). By mandating a documented log, the Teacher-Architect no longer needs to speculate on the origin of the work. The evidence of thought is made visible through a transparent "audit trail" (Cadmus, 2024).

Surgical intervention at the point of need

Richardson and O'Neill (2026) propose "Surgical Intervention" to change behaviour when errors occur: teachers can use the Historical Log to find the exact "Cycle Number" where a learner struggled. This enables prompt, useful feedback that serves FAK, and helps teachers design better learning (Richardson et al., 2022).
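A hypothetical helper shows the idea: given a Historical Log, locate the first cycle whose evaluation signals a struggle. The log format and the keyword list are invented here for illustration only.

```python
# Hypothetical "Surgical Intervention" helper: scan a Historical Log for the
# first cycle where the learner's own evaluation signals a struggle.
# The entry format and keyword list are assumptions for illustration.

def find_struggle_cycle(log, keywords=("confused", "stuck", "unsure")):
    """Return the first cycle number whose evaluation mentions a struggle."""
    for entry in log:  # entries are dicts: {"cycle": int, "evaluation": str}
        if any(k in entry["evaluation"].lower() for k in keywords):
            return entry["cycle"]
    return None

log = [
    {"cycle": 1, "evaluation": "Output looks plausible, accepted."},
    {"cycle": 2, "evaluation": "I'm stuck: the AI contradicts my source."},
    {"cycle": 3, "evaluation": "Resolved after checking the original paper."},
]
```

Here the teacher would intervene at cycle 2, the exact point of confusion, rather than marking only the finished product.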

Instead of reactive plagiarism detection, teachers become designers of inquiry-based learning pathways. Instead of grading final products, they audit thinking processes. This represents a fundamental shift in what it means to teach in the age of AI.

Future Actionable Knowledge

Richardson (date) argues that AI forces us to rethink achievement: since generating content is now easy, learners must do more, and they need different skills to succeed in a changing world.

Bearman and Ajjawi (2023) argue that workplaces now need people who can evaluate AI output, not just produce text, and the World Economic Forum (2023) identifies "Human-in-the-Loop" skills as vital for learners' future work.

The capacity to direct, orchestrate, and verify information becomes more valuable than information itself. Academic rigour is now more accurately found in the student's ability to act as the "architect" of their inquiry, managing AI as a sophisticated tool rather than a substitute for thought (Lodge et al., 2023; Luckin, 2024).

FAK (2023) showed that degrees should equip learners to produce knowledge with technology, and that this knowledge must be verifiable and actionable.

What This Means for Your Classroom

This research has practical implications for how you teach with AI. Consider implementing these approaches:

  1. Require documented inquiry: Ask students to maintain a log of their prompts, AI responses, and their own thinking about whether each response is useful or flawed. This transforms AI from a shortcut into a thinking tool. You might ask: "Show me your conversation with the AI. Where did you disagree with it? Why?"
  2. Audit the process, not just the product: Shift your assessment focus from the final essay or assignment to the documented process of creating it. Use questioning strategies that explore student decisions: "Why did you ask the AI this specific question? What did you learn when it gave you this answer?"
  3. Teach with cognitive load theory in mind: Use AI to offload mechanical tasks (generating initial drafts, brainstorming ideas, creating examples), freeing students to focus on higher-order thinking, evaluating arguments, synthesising multiple perspectives, and building deep schema in their subject area.
  4. Model intellectual friction: Show students how you use AI critically. Think aloud about hallucinations you catch, questions you need to ask, and ways you challenge the AI's assumptions. This demonstrates that engagement with AI is fundamentally adversarial and rigorous, not passive consumption.

Written by the Structural Learning Research Team

Reviewed by Paul Main, Founder & Educational Consultant at Structural Learning

Frequently Asked Questions

What is a historical log in the context of AI education?

Historical logs document each learner's thinking with AI (O'Donnell, 2024): they record prompts, corrections, and redirections rather than only the final work. This active process requires learners to justify their reasoning (Kim et al., 2023).

How do teachers implement the Teacher-Architect model in the classroom?

Teachers shift their focus from being plagiarism detectors to designers of inquiry loops. They assess the structural integrity of a student's thinking by reviewing their documented dialogue with the machine. In practice, this means setting assignments where the process of refining prompts and challenging AI outputs is graded rather than just the final submitted text.

What are the benefits of using an audit trail for AI tasks?

Requiring students to document their inquiry process significantly improves critical thinking and long-term knowledge retention. It disrupts the fluency illusion, which occurs when a learner incorrectly assumes they understand a topic just because the algorithm summarised it well. This process ensures students remain the active drivers of their own learning.

What does the research say about cognitive offloading and artificial intelligence?

Research suggests that technology integration requires curriculum changes to prevent the loss of thinking skills, and that opaque, undocumented inquiry can harm learners' memory. Recording their learning helps learners improve their self-assessment (Vygotsky, 1978; Bruner, 1966; Piaget, 1936).

What are common mistakes when using generative AI in schools?

Instructors err when they assess only the AI's final output and omit the learner's thinking, or when they use AI to create content rather than to challenge learner ideas. Both practices foster dependence and hinder active understanding (Holmes et al., 2023).

Conclusion

Learners gain agency using the Historical Log, which links today's worries to the future of education. The system makes AI use transparent, and the Human-AI Dialectic Loop (Richardson & O'Neill, 2026) prioritises a verifiable process.

Teacher-Architects gain agency using this approach, even with common AI tools. Education's value now lies in creating and justifying logic, not just possessing information (Luckin, 2018; Holmes et al., 2021).

Historical Logs help learners actively engage, not just absorb content. Teachers regain control using logs, moving from plagiarism checks to lesson design. Institutions can prove real learning happens with logs (Holmes, 2024; Smith, 2023).

References

Akgun, M., & Toker, S. (2024). Evaluating the effect of pretesting with conversational AI on retention of needed information. arXiv. https://doi.org/10.48550/arXiv.2412.13487

Ataş & Yildirim (2024). A design model focusing on shared metacognition for online collaborative learning. Educational Technology Research and Development, 72(1), 567-613.

Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160-1173. https://doi.org/10.1111/bjet.13337

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. https://doi.org/10.1145/3442188.3445922

Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417-444. https://doi.org/10.1146/annurev-psych-113011-143823

Carr, N. (2020). The shallows: What the Internet is doing to our brains (2nd ed.). W. W. Norton & Company.

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. Teachers College Press.

Dweck, C. S. (2017). Mindset: The new psychology of success. Penguin Random House.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911.

Fullan, M., Quinn, J., & McEachen, J. (2023). Deep learning: Engage the world, change the world (2nd ed.). Corwin Press.

Hembree, R., & Dessart, D. J. (1986). Effects of hand-held calculators in precollege mathematics education: A meta-analysis. Journal for Research in Mathematics Education, 17(2), 83-99. https://doi.org/10.2307/749257

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779

Lodge, Howard, & Thompson (2023). Generative AI and its impact on assessment. Australian Educational Computing, 38(1). https://doi.org/10.21153/aec2023vol38no1art1757

Luckin, R. (2024). AI for education: A guide for teachers and school leaders. Routledge.

Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can trust. Pantheon.

Mitchell, E., Lee, Y., Khazatsky, A., Manning, C. D., & Finn, C. (2023). DetectGPT: Zero-shot machine-generated text detection using probability curvature. arXiv. https://doi.org/10.48550/arXiv.2301.11305

Mollick, E. (2024). Co-intelligence: Living and working with AI. Portfolio.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Richardson, T. (2017). Early career teachers' conceptions of a quality teacher: A phenomenographic study [Doctoral thesis, University of the Sunshine Coast]. UniSC Research Bank. https://research.usc.edu.au/esploro/outputs/doctoralDegree/Early-career-teachers-conceptions-of-a-quality-teacher-a-phenomenographic-study/99451152002621

Richardson, T. (2022). Actionable knowledge and deep learning: The impact on education. Higher Education Digest, 27, 38-41.

Richardson, T., Thao, D. T. H., Trang, N. T. T., & Anh, N. N. (2020). Assessment to learning: Improving the effectiveness of a teacher's feedback to the learner through future actionable knowledge. Vietnam Journal of Educational Sciences, 16(1), 32-37.

Richardson, T., & O'Neill. (2026). The Human-AI Dialectic Loop: Auditing AI use in education [Manuscript in preparation]. University of the Sunshine Coast.

Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676-688.

Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-generated text be reliably detected? (arXiv:2303.11156). arXiv. https://doi.org/10.48550/arXiv.2303.11156

Weber-Wulff, D., et al. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19, Article 26. https://doi.org/10.1007/s40979-023-00146-z

Wiliam, D. (2018). Embedded formative assessment (2nd ed.). Solution Tree Press.

World Economic Forum. (2023). The future of jobs report 2023. https://www.weforum.org/reports/the-future-of-jobs-report-2023/


Understanding AI in Education

AI worries echo past calculator fears. Educators in the 1970s feared calculators would harm learners' maths skills (Akgun & Toker, 2024). They worried arithmetic offloading would "de-skill" a generation.

A 5-step Human-AI Inquiry Loop: Formulate Prompt, AI Generates Output, Evaluate & Log, Reflect & Redirect, Achieve FAK. Shows iterative learning.
Human-AI Inquiry Loop

Hembree & Dessart (1986) found learners grasp concepts better without calculations. Cuban (1986) suggested technology helps change curricula, not harm it. We can now focus on understanding logic, not just doing sums.

The same principle could apply to AI today. The key difference is that unlike calculators, which provided only static final answers, the Human-AI Dialectic Loop (Richardson & O'Neill, 2026) provides a visible "Audit Trail" of iterative thinking.

Key Takeaways

  1. Generative AI, much like the introduction of electronic calculators, can significantly improve the focus of learning from rote tasks to higher-order conceptual problem-solving. By offloading computational or generative burdens, learners are freed to engage in more complex reasoning and critical thinking, aligning with principles of cognitive load theory where extraneous load is reduced to maximise germane load (Sweller, 1988). This shift allows educators to design curricula that prioritise deeper understanding and application over mere procedural execution.
  2. Establishing clear "audit trails" of thought is paramount for educators to discern authentic learner inquiry and engagement when using AI tools. By encouraging learners to document their process, choices, and reflections, teachers gain insight into their reasoning, much like the diagnostic power of effective formative assessment (Black & Wiliam, 1998). This transparency moves beyond simply assessing the final product, allowing for targeted feedback and the preservation of genuine learning pathways.
  3. Teachers are equiped to act as "teacher-architects," intentionally designing learning environments where AI tools enhance rather than diminish learner agency and critical thinking. This requires a proactive approach to curriculum development and pedagogical innovation, ensuring technology integration supports deeper learning outcomes and fosters learners' metacognitive skills (Fullan, 2001). By shaping how AI is used, educators can preserve the human element of inquiry and problem-solving.
  4. Concerns about AI "de-skilling" learners can be effectively addressed by shifting educational emphasis towards cultivating robust critical inquiry and evaluation skills. Instead of fearing the offloading of routine tasks, educators should focus on teaching learners how to critically assess AI outputs, formulate insightful questions, and engage in independent verification (Paul & Elder, 2007). This approach ensures learners develop the intellectual tools necessary to thrive in an AI-augmented world, moving beyond surface-level engagement.

The Teacher-Architect Model: comparing traditional AI response with the Teacher-Architect approach using the Human-AI Dialectic Loop


The Teacher-Architect Model

From Black Box to Audit Trail

Richardson (date) uses the Human-AI Dialectic Loop to regain thinking skills after AI use. The method views AI as a "Cognitive Adversary". It challenges the learner's logic, not just producing content.

The "Process-Turn" values learning, not just finished writing. Teachers check how learners build understanding (Richardson, date needed). Rigorous questioning, not simple acceptance, builds knowledge (Richardson). Algorithms need clear guidance.

For educators, this raises a critical question: How do we prevent students from passively offloading their thinking to AI, and instead use AI as a tool for deeper learning?

The Audit Trail of Thought

Researchers found that learners stay engaged with AI when using a Historical Log. Documenting their investigation also has benefits (Winne & Perry, 2000). This encourages active thinking for learners (Schwartz et al., 2004).

Cognitive offloading risks obscuring internal thought (Risko & Gilbert, 2016). Logs should not replace thinking. Externalising all feedback makes the learner assess their own thinking. The log makes learners think about their thought patterns with AI output. Logging encourages reflection. Learners decide if AI suggestions are useful (Flavell, 1979).

Richardson et al. (2020) say FAK's value is how a learner justifies their information search path. Logs actively exercise cognition, fighting "passive offloading." Without documentation, learners risk "cognitive atrophy," says Carr (2020). Automated answers may weaken a learner's memory and ability to synthesise information.

Research findings on documentation

Learners who document inquiry remember more and think critically (Ataş & Yildirim, 2024). Research shows this is better than just focusing on the final product. Computer-supported learning helped reveal this effect.

This matters especially in the context of generative AI. The machine's "fluency" often creates a "fluency illusion", where the user believes they understand a topic simply because the AI has summarised it clearly and confidently (Bjork et al., 2013). The Audit Trail disrupts this illusion by requiring students to "show the work" of their logic, much like traditional formative assessment practices.

Mollick (2024) states humans must audit AI outputs. Learners benefit from prompting and refining AI logic. This "Human-in-the-Loop" approach maintains their agency.

Shifting pedagogical gravity

By adopting the Historical Log, the pedagogical "centre of gravity" shifts fundamentally from the final product to the documented evolution of thought. This allows the Teacher-Architect to see the scaffolding of the student's mind.

Dweck (2017) found learners show more motivation when assessment focuses on their process, not just the outcome. Learners with a growth mindset see challenges as part of learning, not signs of failure.

Learners show learning with decision records, creating "forensic" evidence. This readies learners for higher education (Fullan, 2023). Justifying their reasoning proves academic maturity.

Detecting Authentic Inquiry

One concern for educators is whether students might ask AI to retroactively simulate a "History Log" for a finished paper. However, there is a technical reality that provides protection.

LLMs do not match real human learning's irregular flow (Marcus & Davis, 2019). AI makes prompt lists, but the order is too smooth. True learning involves errors and conceptual struggles (Mitchell et al., 2023). Perfect logs miss the cognitive friction needed for future use.

McFarlane (2002) says good human-computer work needs negotiated interruptions, not watching. Learners spot flaws, interrupting AI's process. This "Cognitive Pivot" (Richardson & O'Neill, 2026) changes them from passive to critical thinkers.

The fingerprints of real learning

These "adversarial" interactions are the fingerprints of a human mind at work. Research indicates that AI models, when asked to simulate a dialogue, default to a "cooperative" tone that lacks the abrasive, critical scepticism a student displays when truly grappling with difficult concepts (Bender et al., 2021). Thus, the presence of "Intellectual Friction" (Richardson & O'Neill, 2026) within the log serves as a validated marker of human agency.

Teachers use metadata and temporal logic for forensic analysis. Hours or days show thought evolving in a dialectic (Mollick, 2024). Synthetic logs lack the human "rhythm of work", said Mollick (2024).

AI text shows less drafting variation than humans, say Sadasivan et al. (2023) and Liang et al. (2023). Algorithmic and human checks can spot AI retroactively, based on "burstiness" and "perplexity."

Reclaiming Teacher Agency

Richardson (2024) says Historical Logs help with teacher burnout and integrity issues. These problems impact schools. Learners gain from using this method.

Historical Logs help teachers assess learning processes, easing workloads. This approach reduces time spent on plagiarism checks. Focusing on formative feedback improves learning (Wiliam, 2018).

Teachers currently dedicate excessive hours to "AI detectors" that are notoriously unreliable, frequently producing false positives (Weber-Wulff et al., 2023). By mandating a documented log, the Teacher-Architect no longer needs to speculate on the origin of the work. The evidence of thought is made visible through a transparent "audit trail" (Cadmus, 2024).

Surgical intervention at the point of need

Richardson and O'Neill (2026) propose "Surgical Intervention" to change behaviour when errors occur. Teachers can find the exact "Cycle Number" where a learner struggled using a Historical Log. This helps give learners useful feedback quickly and meets FAK needs. It also helps teachers design better learning (Richardson et al., 2022).

Instead of reactive plagiarism detection, teachers become designers of inquiry-based learning pathways. Instead of grading final products, they audit thinking processes. This represents a fundamental shift in what it means to teach in the age of AI.

Future Actionable Knowledge

Richardson (date) thinks AI means we must rethink achievement. Learners now need different skills for success in a changing world. Generating content is easy with AI, so learners must do more.

Bearman & Ajjawi (2023) say jobs now need people to check AI, not just create text. The World Economic Forum (2023) states "Human-in-the-Loop" skills are vital for learners' future work.

The capacity to direct, orchestrate, and verify information becomes more valuable than information itself. Academic rigour is now more accurately found in the student's ability to act as the "architect" of their inquiry, managing AI as a sophisticated tool rather than a substitute for thought (Lodge et al., 2023; Luckin, 2024).

FAK (2023) showed degrees give learners the skills to produce knowledge with technology. This knowledge must be verifiable and actionable.

What This Means for Your Classroom

This research has practical implications for how you teach with AI. Consider implementing these approaches:

  1. Require documented inquiry: Ask students to maintain a log of their prompts, AI responses, and their own thinking about whether each response is useful or flawed. This transforms AI from a shortcut into a thinking tool. You might ask: "Show me your conversation with the AI. Where did you disagree with it? Why?"
  2. Audit the process, not just the product: Shift your assessment focus from the final essay or assignment to the documented process of creating it. Use questioning strategies that explore student decisions: "Why did you ask the AI this specific question? What did you learn when it gave you this answer?"
  3. Teach with cognitive load theory in mind: Use AI to offload mechanical tasks (generating initial drafts, brainstorming ideas, creating examples), freeing students to focus on higher-order thinking, evaluating arguments, synthesising multiple perspectives, and building deep schema in their subject area.
  4. Model intellectual friction: Show students how you use AI critically. Think aloud about hallucinations you catch, questions you need to ask, and ways you challenge the AI's assumptions. This demonstrates that engagement with AI is fundamentally adversarial and rigorous, not passive consumption.
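One way to picture the documented inquiry log from point 1 is as a simple structured record per exchange. This is a minimal sketch in plain Python; the class and field names are my own illustration, not a format the article prescribes:

```python
# Hypothetical sketch of a student's documented-inquiry log.
# Field names (prompt, ai_response, learner_judgement) are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogEntry:
    """One cycle of the inquiry loop: what was asked, what came back,
    and the learner's own judgement of whether the output was useful or flawed."""
    cycle: int
    prompt: str
    ai_response: str
    learner_judgement: str

@dataclass
class HistoricalLog:
    student: str
    entries: List[LogEntry] = field(default_factory=list)

    def record(self, prompt, ai_response, learner_judgement):
        # Cycle numbers are assigned automatically so the teacher can
        # reference an exact point in the dialogue later.
        self.entries.append(
            LogEntry(len(self.entries) + 1, prompt, ai_response, learner_judgement)
        )

log = HistoricalLog("Sam")
log.record("Summarise the causes of WWI",
           "Militarism, alliances, imperialism, nationalism...",
           "Useful, but it omitted the July Crisis - I asked a follow-up")
print(log.entries[0].cycle)  # 1
```

A paper notebook or shared document with the same four columns works just as well; what matters is that each cycle pairs the AI's output with the learner's judgement of it.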

Written by the Structural Learning Research Team

Reviewed by Paul Main, Founder & Educational Consultant at Structural Learning

Frequently Asked Questions

What is a historical log in the context of AI education?

A Historical Log documents each learner's thinking while working with AI (O'Donnell, 2024). It records prompts, corrections, and redirections rather than only the final product. Keeping the log is an active process that requires learners to justify their reasoning (Kim et al., 2023).

How do teachers implement the teacher architect model in the classroom?

Teachers shift their focus from being plagiarism detectors to designers of inquiry loops. They assess the structural integrity of a student's thinking by reviewing their documented dialogue with the machine. In practice, this means setting assignments where the process of refining prompts and challenging AI outputs is graded rather than just the final submitted text.

What are the benefits of using an audit trail for AI tasks?

Requiring students to document their inquiry process strengthens critical thinking and long-term knowledge retention. It disrupts the fluency illusion: a learner's mistaken sense of understanding a topic simply because the algorithm summarised it well. The audit trail keeps students the active drivers of their own learning.

What does the research say about cognitive offloading and artificial intelligence?

Research suggests that technology must be accompanied by curriculum change to prevent the loss of thinking skills, and that unstructured inquiry can harm learners' memory. Recording their learning helps learners improve their self-assessment (Vygotsky, 1978; Bruner, 1966; Piaget, 1936).

What are common mistakes when using generative AI in schools?

Instructors err when they assess only the AI's final output and omit the learner's thinking. They also often use AI to create content rather than to challenge learner ideas. Both practices foster dependence and hinder active understanding (Holmes et al., 2023).

Conclusion

The Historical Log gives learners agency, linking today's worries about AI to the future of education. The system, based on the work of Richardson and O'Neill (2026), makes AI use transparent, and the Human-AI Dialectic Loop (Richardson & O'Neill, 2026) prioritises a verifiable process.

Teacher-Architects gain agency through this approach, even with common AI tools. Education's value now lies in creating and justifying logic, not just possessing information (Luckin, 2018; Holmes et al., 2021).

Historical Logs help learners engage actively rather than merely absorb content. Teachers regain control, moving from plagiarism checks to lesson design, and institutions can evidence that real learning has taken place (Holmes, 2024; Smith, 2023).

References

Akgun, M., & Toker, S. (2024). Evaluating the effect of pretesting with conversational AI on retention of needed information. arXiv. https://doi.org/10.48550/arXiv.2412.13487

Ataş, A. H., & Yildirim, Z. (2024). [A design model for shared metacognition in online collaborative learning]. Educational Technology Research and Development, 72(1), 567-613.

Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160-1173. https://doi.org/10.1111/bjet.13337

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. https://doi.org/10.1145/3442188.3445922

Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417-444. https://doi.org/10.1146/annurev-psych-113011-143823

Carr, N. (2020). The shallows: What the Internet is doing to our brains (2nd ed.). W. W. Norton & Company.

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. Teachers College Press.

Dweck, C. S. (2017). Mindset: The new psychology of success. Penguin Random House.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911.

Fullan, M., Quinn, J., & McEachen, J. (2023). Deep learning: Engage the world change the world (2nd ed.). Corwin Press.

Hembree, R., & Dessart, D. J. (1986). Effects of hand-held calculators in precollege mathematics education: A meta-analysis. Journal for Research in Mathematics Education, 17(2), 83-99. https://doi.org/10.2307/749257

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779

Lodge, J., Howard, S., & Thompson, K. (2023). [Generative artificial intelligence and assessment]. Australian Educational Computing, 38(1). https://doi.org/10.21153/aec2023vol38no1art1757

Luckin, R. (2024). AI for education: A guide for teachers and school leaders. Routledge.

Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can trust. Pantheon.

Mitchell, E., Lee, Y., Khazatsky, A., Manning, C. D., & Finn, C. (2023). DetectGPT: Zero-shot machine-generated text detection using probability curvature. arXiv. https://doi.org/10.48550/arXiv.2301.11305

Mollick, E. (2024). Co-intelligence: Living and working with AI. Portfolio.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Richardson, T. (2017). Early career teachers' conceptions of a quality teacher: A phenomenographic study [Doctoral thesis, University of the Sunshine Coast]. UniSC Research Bank. https://research.usc.edu.au/esploro/outputs/doctoralDegree/Early-career-teachers-conceptions-of-a-quality-teacher-a-phenomenographic-study/99451152002621

Richardson, T. (2022). [Actionable knowledge, deep learning, and the impact on education]. Higher Education Digest, 27, 38-41.

Richardson, T., Thao, D. T. H., Trang, N. T. T., & Anh, N. N. (2020). Assessment to learning: Improving the effectiveness of a teacher's feedback to the learner through future actionable knowledge. Vietnam Journal of Educational Sciences, 16(1), 32-37.

Richardson, T., & O'Neill. (2026). [The "Human-AI Dialectic Loop": Auditing AI use in education]. Manuscript in preparation, University of the Sunshine Coast.

Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676-688. https://doi.org/10.1016/j.tics.2016.07.002

Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-generated text be reliably detected? (arXiv:2303.11156). arXiv. https://doi.org/10.48550/arXiv.2303.11156

Weber-Wulff, D., et al. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19, Article 26. https://doi.org/10.1007/s40979-023-00146-z

Wiliam, D. (2018). Embedded formative assessment (2nd ed.). Solution Tree Press.

World Economic Forum. (2023). The future of jobs report 2023. https://www.weforum.org/reports/the-future-of-jobs-report-2023/

