Not-so-critical AI literacy

Much of the ‘critical’ academic discourse on ethics and accessibility in relation to new technology does not eat its own dog food. It rarely takes the time to carefully define, categorise and evaluate particular technologies, and it mostly fails to propose credible actions or mitigations.

Please mind the gap between higher education and the technology ecosystem

Higher education has considerable power to positively influence the tech ecosystem, but not by moaning about capitalism or waiting to see what happens. We have to do it by quickly identifying and aggressively advancing innovative and progressive use cases. So when do we start?

Degenerative AI

For AI to be regenerative, it must enable us to generate and preserve real wealth. It must promote and sustain community wellbeing, fairness and sustainability, the fundamental values of the generative economy. And it must do so by design, through its normal functioning, not as a regulatory compliance exercise or a CSR/ESG afterthought.

Techno-optimism, perverted histories and stolen futures

AI has been ‘leveraged’ since 2014 to ‘transform’ the oil and gas industry by enabling the exploitation of previously hard-to-access reserves of fossil fuels, in full awareness of the irreparable damage done as a result. Techno-optimism is not merely naïve. Such future histories of ‘transformation’ are often part of a political project to protect and advance the rights and interests of extractive capitalism while destroying cultures, societies and the environment without a care.

EU removes ‘emotional robots’ from the classroom

The new EU AI Act bans ‘unacceptably risky’ AI applications, including emotion-aware systems that some educational researchers believe can significantly improve learning in the classroom. Is the EU right to outlaw emotional AI for learning alongside such nefarious uses as social scoring and behavioural manipulation? And with the diffusion of emotional AI into mainstream consumer products, including virtual reality headsets, might the ban be a backward step for European learners?

Week notes: 17 July 2023

It may be wishful thinking to hope that AI will simply slot into our current toolsets, making us more efficient at work. Even the current beta tools enable an order-of-magnitude increase in efficiency, far more than a productivity boost to existing roles. Skill in doing a single thing will simply have no value. Instead, we will need skill in ‘multilearning’: the ability to learn and deploy new knowledge and skills quickly. Very quickly.

Week notes: 3 July 2023

This week I began my research into individuals and teams who are able to 1) learn about new technologies quickly, while 2) discerning their impacts and applications in their own domains, and 3) starting to implement them. How do they learn, select and adapt so quickly and effectively, while others do not? If you’re reading this and you are (or you know) someone, some team or some startup that is learning, adopting and adapting new technologies exceptionally fast, I’d love to talk.

Week notes: 19 June 2023

This week I am thinking about the innovation that isn’t: the innovation that doesn’t happen. I don’t mean innovation that fails; I mean innovation that simply never occurs. Might machine and social learning reverse the stagnation of ideas?