In the latest edition of Dr Phil's newsletter, entitled 'The Impact of Gen AI on Human Learning: a research summary', Phil Hardman undertakes a literature review of the most recent and important peer-reviewed studies.
In contrast to many of the studies currently coming out, which tend to claim either amazing success or doom-laden failure for the use of AI in learning, she adopts an analytical and nuanced viewpoint, examining the evidence and providing a list of key takeaways from each report, leading to implications for educators and developers.
Here are the key takeaways from each of the five studies.
- Surface-Level Gains: Generative AI tools like ChatGPT improve task-specific outcomes and engagement but have limited impact on deeper learning, such as critical thinking and analysis.
- Emotional Engagement: While students feel more motivated when using ChatGPT, this does not always translate into better long-term knowledge retention or deeper understanding.
- Over-reliance Hinders Beginners: Over-reliance on AI tools hinders foundational learning, especially for beginners.
- Expertise Matters: Advanced learners can better leverage AI tools to enhance skill acquisition.
- Explanations Over Solutions: Using LLMs for explanations (rather than debugging or code generation) appears less detrimental to learning outcomes.
- Scaffolding Through Customisation: Iterative feedback and tailored exercises significantly enhance learning outcomes and long-term retention.
- Generic AI Risks Dependency: Relying on AI for direct solutions undermines critical problem-solving skills necessary for independent learning.
- Offloading Reduces Cognitive Engagement: Delegating tasks to AI tools frees cognitive resources but risks diminishing engagement in complex and analytical thinking.
- Age and Experience Mitigate AI Dependence: Older, more experienced users exhibit stronger critical thinking skills and are less affected by cognitive offloading.
- Trust Drives Offloading: Increased trust in AI tools encourages over-reliance, further reducing cognitive engagement and critical thinking.
- Confidence ≠ Competence: Generative AI fosters overconfidence but fails to build deeper knowledge or skills, potentially leading to long-term stagnation.
- Reflection and SRL Are Crucial: Scaffolding and guided SRL strategies are needed to counteract the tendency of AI tools to replace active learning.
As Phil Hardman says in the introduction to her article:
At the same time as the use of generic AI for learning proliferates, more and more researchers raise concerns about the impact of AI on human learning. The TLDR is that more and more research suggests that generic AI models are not only suboptimal for human learning — they may actually have an actively detrimental effect on the development of knowledge and skills.
However, she remains convinced that "the potential of AI to transform education remains huge if we shift toward structured and pedagogically optimised systems."
To unlock AI’s transformative potential, she says, "we must prioritise learning processes over efficiency and outputs. This requires rethinking AI tools through a pedagogy-first lens, with a focus on fostering deeper learning and critical thinking."
She provides the following examples:
- Scaffolding and Guidance: AI tools should guide users through problem-solving rather than providing direct answers. A math tutor, for instance, could ask, “What formula do you think applies here, and why?” before offering hints.
- Reflection and Metacognition: Tools should prompt users to critique their reasoning or reflect on challenges encountered during tasks, encouraging self-regulated learning.
- Critical Thinking Challenges: AI systems could engage learners with evaluative questions, such as “What might be missing from this summary?”
It's well worth reading the full article. Phil Hardman seems to be one of the few writing about AI from a pedagogic starting point.
About the Image
This illustration draws inspiration from Leonardo da Vinci’s masterpiece The Last Supper. It depicts a grand discussion about AI. Instead of the twelve apostles, I replaced them with the twelve Chinese zodiac animals. In Chinese culture, each zodiac symbolizes distinct personality traits. Around the table, they discuss AI, each expressing their views with different attitudes, which you can observe through their facial expressions. The table is draped with a cloth symbolizing the passage of time, and it’s set with computer-related objects. On the wall behind them is a mural made of binary code. In the background, there’s an apple tree symbolizing wisdom, with its intertwining branches representing neural networks. The apples, as the fruits of wisdom, are not on the tree but stem from the discussions of the twelve zodiacs. Behind the tree is a Windows 98 System window, opening to the outside world. Through this piece, I explore the history of AI and computer development. Using the twelve zodiacs, I emphasize the diversity of voices in this conversation. I hope more people will join in shaping the diverse narratives of AI history in the future.