Not-so-critical AI literacy

Image description: three photographs of a street, overlaid with intersecting white and blue shapes arranged to resemble QR-code symbols. The first photograph is clear, the second slightly pixelated, and the third heavily pixelated.

Picture credit: Elise Racine & The Bigger Picture / Better Images of AI / Web of Influence I / CC-BY 4.0

This week I’ve heard a couple of presentations about ethics and accessibility in educational technology, particularly AI and XR. And each time I’ve come away feeling both scolded and disempowered.

So much of the ‘critical’ academic discourse on ethics and accessibility in new technology does not take the time to carefully define, categorise and evaluate particular technologies and their associated ethical or accessibility issues. As a result, it mostly fails to propose credible actions or mitigations for institutions, disciplines or individuals.

Much of this work also promotes a passive, even fatalistic, attitude, which feeds the sector’s (selective and hypocritical) technophobia, and erodes our agency and power to intervene in and shape the tech ecosystems of which education is part.

My concern is that, while the pompous hand-waving produces dopamine hits and conference papers for those with hands to wave, it is not moving us forward. Rather, it is paralysing higher education at a critical moment.

BTW this is NOT an anti-EDI rant. It’s a sincere call for us to do better: to take the time to inform ourselves about these technologies, and to reclaim our agency and the courage to exploit technology for good while we still have half a chance.