I guess we are going to be hearing a lot about AI in education in the next year. As regular readers will know, I am working on a European Commission Erasmus Plus project on Artificial Intelligence and Vocational Education and Training. One subject which constantly appears is the issue of ethics. Apart from UK universities' requirements for ethical approval of research projects (more about this in a future post), ethics rarely appears as a focus for debate in education. Yet it is all over the discussion of AI and how we can or should use AI in education.
There is an interesting (and long) article, 'The Invention of "Ethical AI"', recently published by Rodrigo Ochigame on The Intercept website.
Ochigame worked as a graduate student researcher in the AI ethics group of Joichi Ito, the former director of the MIT Media Lab. He left in August last year, immediately after Ito published his initial "apology" regarding his ties to Jeffrey Epstein, in which he acknowledged accepting money from the disgraced financier both for the Media Lab and for Ito's outside venture funds.
The quotes below provide an outline of his argument, although for anyone interested in this field the article merits a full read.
The emergence of this field is a recent phenomenon, as past AI researchers had been largely uninterested in the study of ethics.
The discourse of “ethical AI,” championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies.
This included working on:
the U.S. Department of Defense’s “AI Ethics Principles” for warfare, which embraced “permissibly biased” algorithms and which avoided using the word “fairness” because the Pentagon believes “that fights should not be fair.”
corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition).
it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving “ethical principles” and “responsible practices” as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discourse of “ethical AI” enables precisely this position.
the corporate lobby’s effort to shape academic research was extremely successful. There is now an enormous amount of work under the rubric of “AI ethics.” To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on “ethical AI” is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies.
I am not opposed to the emphasis being placed on ethics in AI and education, and the debates and practices around Learning Analytics show the need to think clearly about how we use technology. But we have to be careful, firstly, that we do not just end up paying lip service to ethics and, secondly, that academic research does not become a cover for the practices of the ed tech industry. Moreover, I think we need a clearer understanding of just what we mean when we talk about ethics in the educational context. For me, the two biggest ethical issues are the failure to provide education for all and the gross inequalities in educational provision based on things like class and gender.