Searching for the AI bridge builders

[Image: railway bridge. Photo by Chraecker (CC0) via Pixabay]

We need to democratise access to AI, but the language we use to talk about it is a barrier.

There is a lot of fear around AI advances, and it is perpetuated when only ‘big tech’ and the academics have access to the tools, the theory and the conversations. For me this is a major theme in AI ethics right now. We can’t have conversations about ‘the black box around generative AI’ as if everyone understands the concept. The same goes for ‘language models’, ‘the dynamics of knowledge production’ and ‘neural networks’. I suspect I have already lost a large chunk of my friends and family, and we are only on the first paragraph.

We talk a lot about bias in the data; there’s a great advert doing the rounds on social media at the moment in which an AI was prompted to draw Barbie dolls from around the world. Some of the results are a shocking reflection of our own stereotypes and cultural tropes, with German Barbie depicted in a Nazi uniform and African Barbie carrying a gun. AI may have created the images, but we supplied the data. It is an accessible depiction of bias; we need more accessible depictions of AI concepts.

As academics, researchers and professionals, what we don't see so easily is the bias innate in our own use of language around AI. It is the same in every industry and in academic circles across all disciplines: we are so used to discussing things with each other that we become stuck in our bubble of understanding, of acronyms and concepts. What we need is a giant pin, and we need more AI Pioneers to bridge the gap between theory and practice. More people willing to stop and ask questions. More translators of AI speak. More people who are comfortable in both worlds, who do not feel alienated by academic circles and equally do not alienate practitioners, who, let’s face it, are the real experts here. It is the practitioners who will find innovative ways to teach with and about the tools, and, as with all previous edtech advances, it is the practitioners who will work out how to ‘hack’ the systems to fit their contexts. It is also the trainers who will be on the ground working with learners with poor digital literacy, trying to engage and enthuse them so that they are not automated out of a job.

I’d like to think that my own work and the projects Pontydysgu is involved with fill that gap nicely, providing introductory materials and creative ways to use AI tools, but a group of trainers I ran a workshop with recently reminded me of the need to slow down and take things back to basics.

When I first started out in edtech I was the trainer-in-training. In one session, billed as a ‘hands-on practical introduction to e-learning’, the instructor showed us how learners’ work could be exhibited on a website. It was new and exciting, the dawn of Web 2.0, and everyone in the room was eager to learn how. But we were then left with the bamboozling task of “now build a website.”

In my workshop, I heard the words “now use that to build a bot” escape my mouth and realised that the student had truly become the master. 

We need to remember to put the scaffolding in place so as not to lose people over the edge, and that includes explaining ourselves clearly, or at least signposting people who can. To quote Einstein: “If you can't explain it simply, you don't understand it well enough.”

If you are one of those people, a gap-bridger, a mediator, an educator and also an AI enthusiast, I warmly invite you to join the AI Pioneers network. Use the contact form on our website to get in touch, join the conversation on Mastodon (like Twitter but without the megalomania) or find us via LinkedIn.

 
