Comments welcome
This experience eventually contributed to my career as an academic and research professor of people-centred technology for learning. But I am slightly ashamed to say that I unconsciously hid my northern roots and accent when I joined full-time academia in 1988. Instead, I chose to project the arty, musician, middle-class values I had picked up from rubbing shoulders with creative people in Camden Town in the early 1980s (squatting and generally being an indie musician). Indeed, a survey by the Sutton Trust (2022), conducted by academics from the Accent Bias in Britain project, found that one in four workers have had their accent mocked at work. The BBC (2022) summarised the report's findings.
In my last lecture (Cook, 2022), at Goethe University in the summer of 2022, I looked at 'AI in learning, leisure and work: ethical considerations'. In the accompanying course I helped undergraduates build their own chatbots (this was all before LLM, Large Language Model, AIs like ChatGPT) and I also got them to use Google Lens, an AI-driven image recognition technology. I went on to highlight that a key question for me, in terms of social justice, was: who are the invisible winners and losers? This remains true today in 2026. I believe digital and AI literacy needs to be mainstream, and yes, the stakes are high. For example, consider these recent examples and issues (the list could go on):
- X's Grok illustrates how AI can be used to harm women
- Fears of LLMs like ChatGPT: "its potential to spread misinformation, facilitate cheating in education, replace jobs, erode critical thinking, perpetuate biases, and create over-reliance, leading to loss of human autonomy; some also fear it might develop human-like (even malevolent) traits, while others worry about data privacy and AI's unpredictable 'hallucinations'" (source: Google AI!)
- Fears about AI in the arts and music. For example, Paul McCartney thinks AI could "rip off" artists by using their work, without permission, to train AI models
- Fears of a generational social divide: warnings of a "big split" in society between people who grasp how AIs work and are able to control or challenge their increasing role in automating decisions in areas including housing, welfare, health, criminal justice and finance, and, on the other hand, a cadre of AI illiterates who risk social disempowerment (Booth, 2026)
How do we make the winners and losers visible? First, we need to foster a common language, or common ground, that sows the seeds for a nuanced understanding and broad dialogue around ethics, social mobility and justice, literacies, competencies and power. And we need to scaffold hands-on experience of using these tools.
I have avoided discussing the extensive literature on data, digital and AI literacies and will post about this down the line. AI raises many other concerns that I have dodged here, particularly the 'singularity' idea and smart weapons. I still feel that the ZoP offers a grounded analytical tool that remains relevant for looking at social mobility and learning. Like the Hammersmith and Fulham Playbus Scheme that Ian Miller championed, I believe in getting your hands on tools and learning actively in partnership with these AI tools; in fact, I once proposed a design principle for this.
