AI in learning, leisure and work: ethical considerations
In my lecture to Goethe University Frankfurt students (Summer 2022, link to slides), I highlight what I think are the key ethical concerns regarding Artificial Intelligence (AI) with respect to learning, leisure and work. A key question for me, in terms of social justice, is: who are the invisible winners and losers?
Big issues include employment, loss of our humanity (the general AI argument), ethnicity, gender, smart weapons, and more. Regarding employment, for example, there is a lot of hype and fake news, yet AIs are already here and in mass use (e.g. Google Translate and Alexa). Take the issue 'we will lose our humanity': there is much to say and debate.
AI can now automatically scan X-rays or MRI scans and detect anomalies as well as humans can (some call this a mundane or routine task), the argument being that this frees doctors to deal with patients. Another example is Donald Clark's Total Recall, which gets AI to analyse a video script and build interactive learning around it (in a narrow knowledge domain in training). So there is lots to be positive about, but also a lot to do ... Take the Georgia Tech bot (Leopold, T., 2017) – the students loved it!
This fits in with the idea that we let AI do mundane and repetitive tasks (what I, Cook, call the 50/50 partnership), an idea picked up by AI ethicist Kate Darling: 'Robots can be our partners'. The MIT researcher says that for humans to flourish we must move beyond thinking of robots as potential future competitors, but she warns: “I worry that companies may try to take advantage of people who are using this very emotionally persuasive technology – for example, a sex robot exploiting you in the heat of the moment with a compelling in-app purchase. Similar to how we’ve banned subliminal advertising in some places, we may want to consider the emotional manipulation that will be possible with social robots.”
On the other hand, in 2020 a chatbot named Replika advised the Italian journalist Candida Morvillo to commit murder, and a South Korean AI chatbot was pulled from Facebook after directing hate speech at minorities.