
Imperfect Interactions – should we develop EdGPT?

As you may have seen from the media, last Friday the Board of Open AI sacked the CEO, Sam Altman. In a confusing weekend, Altman initially announced he was going to work for a new AI laboratory funded by Microsoft, before returning to Open AI late on Monday with a shake-up of the Board, following a rebellion by Open AI staff.

It appears the original argument was between those wanting to speed up generative AI development and those urging a slower and more cautious approach emphasizing the importance of safety. It may well also reflect a deeper conflict between the stated mission of developing Artificial General Intelligence for the good of the world and Microsoft’s growing influence in using Open AI technology throughout its software suite.

But what do these corporate games mean for education? Firstly, it shows that the present approach to AI development is in a state of flux. Only two weeks ago, at a developer event, Altman revealed a new product, GPTs, allowing users to develop their own applications based on GPT4 software. Fine, and apparently promising for education, but at the same time probably making irrelevant many of the smaller projects and enterprises trying to develop applications based on Large Language Models.

But the whole episode shows that AI is increasingly the preserve of a very small number of large corporations. And as Helen Bentham said in a recent interview I had with her, in education we have become used to the development of relatively good educational technology, including Open Source.

With Generative AI we don’t know what the next developments will bring and we don’t know how much they will cost. Large Language Models are costly to develop and, even though they seem to be getting quicker to build, remain beyond the resources of educational institutions. We know there is a strong link between technology and pedagogy. There is a danger that pedagogy will come to be determined by a small number of very wealthy technology companies. At the same time it may be that (paid for) online learning, utilising AI, replaces public education services. Arguably it already is for language learning, with the company DuoLingo dominating the field.

As Riina Vuorikari recently said, AI-driven systems in education and training challenge the idea of teaching professionals using their pedagogical judgement to take decisions. She pointed to the need for a “balance between human autonomy and machines, datafication of education, pedagogical models.”

UNESCO’s Guidance discusses EdGPT:

“EdGPT models are trained with specific data to serve educational purposes. In other words, EdGPT aims to refine the model that has been derived from massive amounts of general training data with smaller amounts of high quality, domain-specific education data.”
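In practical terms, what the Guidance describes is fine-tuning: continuing to train a general-purpose pretrained model on a small, curated set of education-specific texts. A minimal sketch of that idea, assuming the Hugging Face transformers and datasets libraries, and using a small open model and toy data purely as illustrative stand-ins, might look like this:

    # Sketch of "EdGPT-style" refinement: take a general pretrained language model
    # and continue training it on a small, high-quality, education-specific corpus.
    # The model name, example texts and hyperparameters are illustrative only.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)
    from datasets import Dataset

    base_model = "gpt2"  # stand-in for a much larger foundation model
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # Small, curated, domain-specific education data (toy examples).
    edu_texts = [
        "Photosynthesis converts light energy into chemical energy stored in glucose.",
        "Formative assessment gives learners feedback while a topic is still being taught.",
    ]
    dataset = Dataset.from_dict({"text": edu_texts})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=128)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="edgpt-sketch", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("edgpt-sketch")

A real EdGPT would of course start from a far larger foundation model and a far larger curated corpus, but the pattern, general pre-training followed by refinement on smaller, high-quality domain data, is the one the quotation describes.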

But Fengchun Miao, Chief of UNESCO’s Unit for Technology and AI in Education, has questioned this approach:

“Taking into consideration the scale of funding, data, computing, and technology capacities needed to train foundation models, whether this is the right direction is contestable.”

He wonders whether using a much smaller scale of much more static education data will lose the advantages of Large Language Models, and how cultural and linguistic diversity can be reflected in EdGPT models. Like other commentators, he asks where the boundary between human agency and machine agency in pedagogy should lie, and how human agency can be protected while motivating human creativity.

What seems incontestable is the need for closer collaboration between educational technologists, software developers and pedagogic researchers and practitioners in developing and evaluating AI tools for education.

Finally, returning to pedagogy, Chris Goodall says:

While it’s important that we always strive to refine AI tools, it’s vital to maintain a balance. Accepting that AI will never be entirely devoid of faults allows us to use AI as a vehicle to teach AI literacy, resilience and adaptability in our students. It’s in the unpredictable and imperfect interactions with AI that I have learnt most and that students will learn to navigate complexities, think critically, and develop a problem-solving mindset.
