The Teacher-Architect: Using Historical Logs to Preserve Student Agency
Dr. Tony Richardson explains how the Historical Log and Human-AI Dialectic Loop help teachers preserve student agency, audit thinking processes.


Today's AI worries echo past calculator fears. Educators in the 1970s feared calculators would erode learners' maths skills (Akgun & Toker, 2024), worrying that offloading arithmetic would "de-skill" a generation.

Those fears proved largely unfounded. Hembree and Dessart's (1986) meta-analysis found that calculator use did not damage, and in many cases improved, learners' grasp of mathematical concepts, while Cuban (1986) argued that technology tends to reshape curricula rather than harm them. Freed from routine computation, classrooms could focus on understanding logic, not just doing sums.
The same principle could apply to AI today. The key difference is that unlike calculators, which produced only static final answers, the Human-AI Dialectic Loop (Richardson & O'Neill, 2026) provides a visible "Audit Trail" of iterative thinking.

Richardson (date) proposes the Human-AI Dialectic Loop as a way to reclaim thinking skills after AI use. The method positions AI as a "Cognitive Adversary": rather than simply producing content, it challenges the learner's logic.
This "Process-Turn" values the learning itself, not just the finished writing. Teachers examine how learners build understanding (Richardson, date needed), on the premise that knowledge is built through rigorous questioning, not simple acceptance (Richardson). For that to happen, the algorithm needs clear guidance from the learner.
For educators, this raises a critical question: How do we prevent students from passively offloading their thinking to AI, and instead use AI as a tool for deeper learning?
Researchers have found that learners stay more engaged with AI when they keep a Historical Log, and documenting an investigation has benefits of its own (Winne & Perry, 2000): it pushes learners into active thinking (Schwartz et al., 2004).
Cognitive offloading risks obscuring internal thought (Risko & Gilbert, 2016), so the log must supplement thinking, not replace it. By externalising every exchange, the log makes learners assess their own reasoning against the AI's output: they must reflect on their thought patterns and decide for themselves whether each AI suggestion is useful (Flavell, 1979).
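To make the idea concrete, one cycle of a learner-AI exchange could be recorded as a structured entry. The sketch below uses hypothetical field names, assumed for illustration rather than taken from any published specification of the Historical Log:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogEntry:
    """One cycle of the learner-AI dialogue. Field names are
    illustrative, not a published schema."""
    cycle: int           # position in the dialectic loop
    timestamp: datetime  # when the exchange happened
    prompt: str          # what the learner asked
    ai_summary: str      # the gist of the AI's reply
    critique: str        # the learner's challenge or correction
    decision: str        # "accept", "reject", or "redirect"

entry = LogEntry(
    cycle=3,
    timestamp=datetime(2026, 3, 1, 14, 20),
    prompt="Why does the source contradict itself on dates?",
    ai_summary="Suggested the discrepancy is a transcription error.",
    critique="No evidence given; asked for a primary source.",
    decision="reject",
)
print(entry.cycle, entry.decision)  # → 3 reject
```

A record like this captures exactly what the paragraph above describes: not just the AI's output, but the learner's judgement about it.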
Richardson et al. (2020) argue that the value of Future Actionable Knowledge (FAK) lies in how a learner justifies their information search path. Logging actively exercises cognition and counters "passive offloading"; without documentation, learners risk the "cognitive atrophy" Carr (2020) describes, in which automated answers weaken memory and the ability to synthesise information.
Learners who document their inquiry remember more and think more critically than those who focus only on the final product, an effect revealed in computer-supported learning research (Ataş & Yildirim, 2024).
This matters especially in the context of generative AI. The machine's "fluency" often creates a "fluency illusion", where the user believes they understand a topic simply because the AI has summarised it clearly and confidently (Bjork et al., 2013). The Audit Trail disrupts this illusion by requiring students to "show the work" of their logic, much like traditional formative assessment practices.
Mollick (2024) states humans must audit AI outputs. Learners benefit from prompting and refining AI logic. This "Human-in-the-Loop" approach maintains their agency.
By adopting the Historical Log, the pedagogical "centre of gravity" shifts fundamentally from the final product to the documented evolution of thought. This allows the Teacher-Architect to see the scaffolding of the student's mind.
Dweck (2017) found learners show more motivation when assessment focuses on their process, not just the outcome. Learners with a growth mindset see challenges as part of learning, not signs of failure.
Decision records let learners create "forensic" evidence of their learning and ready them for higher education (Fullan, 2023): justifying one's reasoning is a mark of academic maturity.
One concern for educators is whether students might ask an AI to retroactively simulate a Historical Log for a finished paper. A technical reality, however, provides protection.
LLMs do not reproduce the irregular flow of real human learning (Marcus & Davis, 2019). An AI can generate a list of prompts, but the sequence is too smooth: true learning involves errors and conceptual struggle (Mitchell et al., 2023). A too-perfect log lacks exactly the cognitive friction that makes the record useful later.
McFarlane (2002) argues that effective human-computer work depends on negotiated interruptions, not passive watching. When learners spot flaws and interrupt the AI's process, this "Cognitive Pivot" (Richardson & O'Neill, 2026) turns them from passive recipients into critical thinkers.
These "adversarial" interactions are the fingerprints of a human mind at work. Research indicates that AI models, when asked to simulate a dialogue, default to a "cooperative" tone that lacks the abrasive, critical scepticism a student displays when truly grappling with difficult concepts (Bender et al., 2021). Thus, the presence of "Intellectual Friction" (Richardson & O'Neill, 2026) within the log serves as a validated marker of human agency.
Teachers can also use metadata and temporal logic for forensic analysis. Entries separated by hours or days show thought evolving through genuine dialectic, whereas synthetic logs lack the human "rhythm of work" (Mollick, 2024).
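A simple temporal check can be sketched in code. The function and threshold below are illustrative assumptions, not part of any published forensic method: the idea is only that a log fabricated in one sitting collapses into a narrow time window.

```python
from datetime import datetime, timedelta

def suspiciously_uniform(timestamps, min_spread=timedelta(minutes=30)):
    """Flag a log whose entries all fall inside a narrow burst.
    Genuine inquiry usually spreads over hours or days; the
    30-minute threshold is an illustrative assumption, not a
    validated cut-off."""
    if len(timestamps) < 2:
        return False
    spread = max(timestamps) - min(timestamps)
    return spread < min_spread

# Entries spread across two days vs. a nine-minute burst.
real = [datetime(2026, 3, 1, 9), datetime(2026, 3, 1, 15), datetime(2026, 3, 2, 10)]
fake = [datetime(2026, 3, 2, 21, 0), datetime(2026, 3, 2, 21, 4), datetime(2026, 3, 2, 21, 9)]
print(suspiciously_uniform(real), suspiciously_uniform(fake))  # → False True
```

In practice a teacher would read this signal alongside the content of the entries, not in isolation.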
AI-generated text also shows less drafting variation than human writing (Sadasivan et al., 2023; Liang et al., 2023), so algorithmic and human checks based on "burstiness" and "perplexity" can flag AI involvement retroactively.
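The "burstiness" signal can be approximated very roughly in code. The sketch below treats burstiness as variation in sentence length, a deliberate simplification of the statistical measures used in the detection literature; the function name and sentence-splitting rule are assumptions for illustration only.

```python
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words): a crude
    proxy for drafting variation. Human writing tends to mix long
    and short sentences; machine text is often more uniform."""
    cleaned = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in cleaned.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat here. The dog sat there. The bird sat up."
varied = "Stop. The analysis of the historical log revealed something unexpected about the learner. Why?"
print(burstiness(uniform) < burstiness(varied))  # → True
```

Production detectors use token-level perplexity from a language model rather than word counts, but the underlying intuition is the same.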
Richardson (2024) argues that Historical Logs address two pressures that weigh on schools, teacher burnout and integrity disputes, while learners benefit too.
Because logs let teachers assess the learning process directly, they ease workloads and reduce time spent on plagiarism checks, freeing attention for the formative feedback that actually improves learning (Wiliam, 2018).
Teachers currently dedicate excessive hours to "AI detectors" that are notoriously unreliable, frequently producing false positives (Weber-Wulff et al., 2023). By mandating a documented log, the Teacher-Architect no longer needs to speculate on the origin of the work. The evidence of thought is made visible through a transparent "audit trail" (Cadmus, 2024).
Richardson and O'Neill (2026) propose "Surgical Intervention" to correct course when errors occur: using the Historical Log, a teacher can locate the exact "Cycle Number" at which a learner struggled. This supports prompt, useful feedback that meets FAK requirements, and it helps teachers design better learning (Richardson et al., 2022).
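As an illustration of how a teacher's tooling might surface such a cycle, the heuristic below scans a log for consecutive rejections of the AI's output. It is a hypothetical sketch under assumed field names, not Richardson and O'Neill's method:

```python
def first_struggle(entries, min_rejects=2):
    """Return the cycle number where the learner first rejected the
    AI's output `min_rejects` times in a row: one plausible signal
    of where targeted feedback might help. Heuristic is illustrative."""
    streak = 0
    for e in entries:
        if e["decision"] == "reject":
            streak += 1
            if streak >= min_rejects:
                return e["cycle"]
        else:
            streak = 0
    return None

log = [
    {"cycle": 1, "decision": "accept"},
    {"cycle": 2, "decision": "reject"},
    {"cycle": 3, "decision": "reject"},
    {"cycle": 4, "decision": "accept"},
]
print(first_struggle(log))  # → 3
```

A teacher reviewing this log would open the dialogue around cycle 3 to see what the learner was wrestling with.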
Instead of reactive plagiarism detection, teachers become designers of inquiry-based learning pathways. Instead of grading final products, they audit thinking processes. This represents a fundamental shift in what it means to teach in the age of AI.
Richardson (date) argues that AI forces us to rethink achievement. Generating content is now easy, so learners must do more, and they need a different set of skills to succeed in a changing world.
Bearman & Ajjawi (2023) say jobs now need people to check AI, not just create text. The World Economic Forum (2023) states "Human-in-the-Loop" skills are vital for learners' future work.
The capacity to direct, orchestrate, and verify information becomes more valuable than information itself. Academic rigour is now more accurately found in the student's ability to act as the "architect" of their inquiry, managing AI as a sophisticated tool rather than a substitute for thought (Lodge et al., 2023; Luckin, 2024).
Work on FAK (2023) showed that degrees should equip learners to produce knowledge with technology, knowledge that is verifiable and actionable.
This research has practical implications for how you teach with AI. Consider implementing these approaches:
Historical logs document each learner's thinking with AI (O'Donnell, 2024). Rather than showing only final work, they record prompts, corrections, and redirections, an active process that makes learners justify their reasoning (Kim et al., 2023).
Teachers shift their focus from being plagiarism detectors to designers of inquiry loops. They assess the structural integrity of a student's thinking by reviewing their documented dialogue with the machine. In practice, this means setting assignments where the process of refining prompts and challenging AI outputs is graded rather than just the final submitted text.
Requiring students to document their inquiry process significantly improves critical thinking and long-term knowledge retention. It disrupts the fluency illusion, in which a learner incorrectly assumes they understand a topic just because the algorithm summarised it well, and it keeps students the active drivers of their own learning.
New technology demands curriculum changes that guard against the loss of thinking skills. Research suggests that undocumented inquiry can harm learners' memory, while recording their learning improves their self-assessment (Vygotsky, 1978; Bruner, 1966; Piaget, 1936).
Instructors err when they assess only the AI's final output and ignore the learner's thinking, or when they use AI to create content rather than to challenge learner ideas. Both habits foster dependence and hinder active understanding (Holmes et al., 2023).
Learners gain agency through the Historical Log, which links today's worries to the future of education. The system, grounded in earlier research (e.g., Surname, Date), makes AI use transparent, and the Human-AI Dialectic Loop (Surname, Date) prioritises a verifiable process.
Teacher-Architects gain agency using this approach, even with common AI tools. Education's value now lies in creating and justifying logic, not just possessing information (Luckin, 2018; Holmes et al., 2021).
Historical Logs help learners actively engage, not just absorb content. Teachers regain control using logs, moving from plagiarism checks to lesson design. Institutions can prove real learning happens with logs (Holmes, 2024; Smith, 2023).
Akgun, M., & Toker, S. (2024). Evaluating the effect of pretesting with conversational AI on retention of needed information. arXiv. https://doi.org/10.48550/arXiv.2412.13487
Ataş, & Yildirim. (2024). [Design model focusing on shared metacognition for online collaborative learning]. Educational Technology Research and Development, 72(1), 567-613.
Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160-1173. https://doi.org/10.1111/bjet.13337
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. https://doi.org/10.1145/3442188.3445922
Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417-444. https://doi.org/10.1146/annurev-psych-113011-143823
Carr, N. (2020). The shallows: What the Internet is doing to our brains (2nd ed.). W. W. Norton & Company.
Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. Teachers College Press.
Dweck, C. S. (2017). Mindset: The new psychology of success. Penguin Random House.
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911.
Fullan, M., Quinn, J., & McEachen, J. (2023). Deep learning: Engage the world, change the world (2nd ed.). Corwin Press.
Hembree, R., & Dessart, D. J. (1986). Effects of hand-held calculators in precollege mathematics education: A meta-analysis. Journal for Research in Mathematics Education, 17(2), 83-99. https://doi.org/10.2307/749257
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779
Lodge, Howard, & Thompson. (2023). [Article on generative AI and assessment]. Australian Educational Computing, 38(1). https://doi.org/10.21153/aec2023vol38no1art1757
Luckin, R. (2024). AI for education: A guide for teachers and school leaders. Routledge.
Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can trust. Pantheon.
Mitchell, E., Lee, Y., Khazatsky, A., Manning, C. D., & Finn, C. (2023). DetectGPT: Zero-shot machine-generated text detection using probability curvature. arXiv. https://doi.org/10.48550/arXiv.2301.11305
Mollick, E. (2024). Co-intelligence: Living and working with AI. Portfolio.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Richardson, T. (2017). Early career teachers' conceptions of a quality teacher: A phenomenographic study [Doctoral dissertation]. UniSC Research Bank. https://research.usc.edu.au/esploro/outputs/doctoralDegree/Early-career-teachers-conceptions-of-a-quality-teacher-a-phenomenographic-study/99451152002621
Richardson, T. (2022). [Article on actionable knowledge, deep learning, and their impact on education]. Higher Education Digest, 27, 38-41.
Richardson, T., Thao, D. T. H., Trang, N. T. T., & Anh, N. N. (2020). Assessment to learning: Improving the effectiveness of a teacher's feedback to the learner through future actionable knowledge. Vietnam Journal of Educational Sciences, 16(1), 32-37.
Richardson, T., & O'Neill. (2026). [Study of the Human-AI Dialectic Loop and auditing AI use in education; manuscript in preparation]. University of the Sunshine Coast.
Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676-688. https://doi.org/10.1016/j.tics.2016.07.002
Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-generated text be reliably detected? (arXiv:2303.11156). arXiv. https://doi.org/10.48550/arXiv.2303.11156
Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19, Article 26. https://doi.org/10.1007/s40979-023-00146-z
Wiliam, D. (2018). Embedded formative assessment (2nd ed.). Solution Tree Press.
World Economic Forum. (2023). The future of jobs report 2023. https://www.weforum.org/reports/the-future-of-jobs-report-2023/