
Generative AI: three key ethical challenges

[Image: a dilapidated grey building rendered as a 3D plan against a bright red background, with 0s and 1s streaming out of the windows and gaps in the roof. Anton Grabolle / Better Images of AI / AI Architecture / CC-BY 4.0]

I have had a break from AI and education (or at least from writing about it) over the holiday period. But here goes again.

Generally there seems to be a little disillusionment setting in with generative AI. Commentators are noting the lack of innovative and convincing use cases and the slow uptake in industry and business. It has also been pointed out that, in extending the OpenAI model, GPT-4 may have killed off many start-up initiatives.

Added to these woes are the failure to cure the so-called hallucination problem and growing concerns about the ecological impact of AI (or, more likely for AI companies, the financial cost of running Large Language Models).

Cost is also a barrier for education providers developing applications based on the commercial Large Language Models. AI is increasingly being used in education through its integration in products from Microsoft and Google, amongst others. However, this raises the issue of whether AI is another tool to support the growing commodification of education.

In an excellent presentation at the UK ALT winter conference, comprehensively documented in her Substack newsletter Imperfect Offerings, Helen Beetham interrogates the issue of generative AI and ethics through the lens of the impact of systemic inequities. She says AI tools "[can] provide a valuable starting point to open conversations about difficult ethical questions about knowledge, understanding and what it means to learn and be human."

She questions whether any of the existing AI models we are using in universities and colleges currently meet the requirements of the new EU Artificial Intelligence Act, and asks how this can be assured "as a sector, without building and maintaining our own models, or at least being part of an open development environment in which we can verify that all these requirements are being met."

What is needed, she suggests, "is for the sector to decide what kind of ecosystem it wants – a commons of shared tools, data and expertise, with an explicit public mission, or a landscape of defended ivory towers, each highly vulnerable."

She concludes that:

We will also have to deal with three key ethical challenges, and I think we can only do this as a whole sector: the human and computing power required; the future of creative commons licensing in relation to synthetic models; and the equitable, open, unbiased and transparent use of data.
