AI: How do we make explainability open to practitioners and practice?

Over the last six or so weeks I have been doing a series of interviews about AI in education for the AI Pioneers project. One issue was who to ask: just about everyone interested in the future use of technology for education has something to say about AI right now. I stumbled towards two main criteria. Firstly, to ask people I had worked with before and whose opinions I valued. Secondly, to include people who had worked on continuing professional development for teachers and trainers. At the end of the day, the interviews, together with a survey, form the main part of a Work Package in the AI Pioneers project looking at the competences required by teachers and trainers for working with AI, with the objective of extending the EU DigCompEdu framework.

This week I am going to publish the (edited) transcripts of four of the interviews. I will follow this up next week with comments on what I think are the major issues, together with a podcast where you can listen to the ideas of those interviewed in their own voices.

The first interview is with Helen Beetham.

Helen Beetham is an experienced consultant, researcher and educator, based in the UK, and working mainly in the field of digital education in the university sector. Publications include: Rethinking pedagogy for a digital age (Routledge 2006, 2010 and 2019, with Rhona Sharpe), Rethinking learning for a digital age, numerous book chapters and peer reviewed academic papers, including recently an edited Special issue of Learning, Media and Technology (2022). Current research centres on critical pedagogies of technology, and subject specialist pedagogies, in the context of new challenges to critical thinking and humanist epistemology.

She has advised global universities and international bodies on their digital education strategies, and led invited workshops at over 40 universities around the world as well as working on the development of DigCompEdu. Her Digital Capabilities framework is widely used in UK Education, in Health Education, and in other national education systems.

Helen went to university to study AI! She is currently writing up her research into digital criticality in education and writing a Substack, 'Imperfect Offerings', focused on the challenges of Generative AI.

Digital Literacy, Capabilities and Ethics

Helen explained that her work on digital literacy was based on a Capabilities Framework intended as a "steady state", but one which could be updated to address changing technologies. Even more than most, she said, teachers and trainers lack digital capabilities, and technologies have to be available to them without being subject to the big technology companies. They need online materials and learning routes and pathways, not just ones imposed from above. Who can build foundation models, she asked, given their complexity and cost? There have to be choices for colleges and collectives. Education needs its own language models, foundation models and data. AI cannot be ethical if there is no choice.

This means we need the capability in the education sector to build ethical versions of foundation models. Although Helen has always considered pedagogy to come first in the use of technology for teaching and learning, today, she argued, we need technical models within the education sector. We need to stop the brain drain from the sector and mobilise the community for development, and we can reduce the risks of AI by using open development models.

Helen asked what it means to be ethical. She pointed to other developments in education, such as the campaign to decolonise education.

Conversations and Practice

The contribution of frameworks like the Digital Capabilities framework or DigCompEdu is that they lead to conversations, and those conversations lead to changing practice. Bodies like Jisc and the European Union create initiatives and spaces for those conversations.

Explainability

Can we open the black box around Generative AI, she asked. Explainability is a good goal, but how do we make explainability open to practitioners and practice? She has been trying to develop explainability by looking at the labour situation and the restructuring of employment through AI.

A further need for explainability relates to the meaning of words and concepts like learning, intelligence and training. All the models claim to use these processes, but there is a need to explain just what they mean in developing foundation models and how such processes are applied in AI.

Wikipedia, weightings and AI

In a wide-ranging interview, another issue was the use of Wikipedia in large language models. How were weightings derived for data from Wikipedia, and is Wikipedia in fact over-influential in the language models? What should Wikipedia do now – should it develop its own AI-powered front end?
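To make the idea of "weightings" concrete: large language models are typically trained on a mixture of corpora, each sampled with a weight chosen by the developers rather than in proportion to its raw size. A minimal illustrative sketch in Python, using the training-mix weights reported in the GPT-3 paper (Brown et al., 2020), where Wikipedia made up roughly 3% of the mix despite being a far smaller share of the raw tokens – that is, it was deliberately oversampled:

```python
import random

# Training-mix weights reported in the GPT-3 paper (Brown et al., 2020).
# The weights are set by the developers, not derived from corpus size:
# Wikipedia is ~3% of the mix but a much smaller share of the raw tokens,
# so it is sampled more often than its size alone would suggest.
corpus_weights = {
    "Common Crawl (filtered)": 0.60,
    "WebText2": 0.22,
    "Books1": 0.08,
    "Books2": 0.08,
    "Wikipedia": 0.03,
}

def sample_training_source(weights: dict[str, float]) -> str:
    """Pick the corpus the next training batch is drawn from."""
    sources = list(weights)
    return random.choices(sources, weights=list(weights.values()), k=1)[0]

# Roughly 3 batches in every 100 would come from Wikipedia.
counts = {name: 0 for name in corpus_weights}
for _ in range(10_000):
    counts[sample_training_source(corpus_weights)] += 1
print(counts)
```

This is only a sketch of the sampling idea, not how any particular model was actually built; how such weights are derived in practice is exactly the kind of question explainability would need to answer.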

Future of employment

Looking at the future of employment, it seems likely that fewer skills may be needed to do the same things that were undertaken by skilled workers prior to AI. Universities are assuming that they can train students to take up high-level roles that GPT cannot touch, yet it is precisely these roles that automation is reducing, with increasing impact on the professional middle classes. It seems more likely that GPT automation will have less effect on technically skilled work, especially tasks which require unpredictable activities in less certain contexts – jobs that are based on vocational education and training.

You can subscribe to Helen Beetham's Substack, 'Imperfect Offerings', at https://helenbeetham.substack.com/.
