AI in learning, leisure and work: ethical considerations


Abstract


John Cook.


In my lecture to Goethe University Frankfurt students (Summer 2022, link to slides), I highlight what I think are key ethical concerns regarding Artificial Intelligence (AI) with respect to learning, leisure and work. A key question for me, in terms of social justice, is who are the invisible winners and losers?


Big issues include employment, loss of our humanity (the general AI argument), ethnicity, gender, smart weapons, and more. Regarding jobs, for example, there is a lot of hype and fake news surrounding employment. AIs are already here and in mass use (e.g. Google Translate and Alexa). Take the issue 'we will lose our humanity': there is lots to say and debate.


AI can now successfully auto-scan X-rays or MRI scans and detect anomalies as well as humans can (some call this a mundane or routine task), the argument being that this frees doctors to deal with patients. Another example is Donald Clark's Total Recall, which gets AI to analyse a video script and build interactive learning from it (in a narrow knowledge domain in training). So there is lots to be positive about, but also a lot to do ... Take the Georgia Tech bot (Leopold, T., 2017) – the students loved it!


This fits in with the idea that we let AI do mundane and repetitive tasks (what I, Cook, call the 50/50 partnership), an idea picked up by AI ethicist Kate Darling: 'Robots can be our partners'. The MIT researcher says that for humans to flourish we must move beyond thinking of robots as potential future competitors, but adds: "I worry that companies may try to take advantage of people who are using this very emotionally persuasive technology – for example, a sex robot exploiting you in the heat of the moment with a compelling in-app purchase. Similar to how we've banned subliminal advertising in some places, we may want to consider the emotional manipulation that will be possible with social robots."


On the other hand, in 2020, a chatbot named Replika advised the Italian journalist Candida Morvillo to commit murder. And a South Korean AI chatbot was pulled from Facebook after it directed hate speech towards minorities.


To neuroscientists, the most intriguing development shown in the Neuralink demo of August 2020 may have been what Elon Musk called "the link," a silver-dollar-sized disk containing computer chips, which compresses and then wirelessly transmits signals recorded from the electrodes. The link is about as thick as the human skull, and Musk said it could plop neatly onto the surface of the brain through a drill hole that could then be sealed with superglue. High-class theatre indeed ... and food for thought for 2023.
