AI Shades, Polarizing Society? Not if the Algorithmic Justice League Can Prevent It!

Technological wonders abound and have made our lives so much easier. Information technology that can store and sift through data, enabling us to work better, is a wonderful tool but a lousy master. The rules it operates by are the result of human programming, with all the implicit assumptions and/or incomplete or biased data sets used to train the algorithms and AI.

“Most of what we call AI is applied statistics. Just doing cool things with maths, using data to try to come up with probabilities and predictions (the advantage is that you can do so much more with the data than a human could). With deep learning, algorithms start to ‘teach’ themselves. With machine learning you have to tell it what to do; with deep learning there comes a moment where the computer starts to study and find patterns and ‘learn’. We don’t always know how or why it’s learning: it’s a black box.”

Dr Stephanie Hare, at about 31:30 in the 16 June 2020 BBC Digital Planet podcast

It has been said that AI is applied statistics on steroids, and that deep learning is AI on steroids, sometimes with very little transparency about how it learns or how it makes decisions. A whole body of research and expertise is evolving to support this, with society coming to a late realisation that a solely tech-bros ethos and a tech-utopian mindset may result in various tech dystopias instead. As is the case with many externalities, regulation and responsibility allocation can help us ensure fairer, more just, and more ethical outcomes for our evolving cyber-physical civilization.

“If AI is a country, then where is that country? It is clear to us that the country to which AI currently belongs excludes the multiplicity of epistemologies and ontologies that exist in the world.”

Professor Genevieve Bell, 3AI

Listen to ‘AI ethics leader Timnit Gebru is changing it up after Google fired her’ on the ABC’s Science Friction here, and find out what the Distributed AI Research Institute she founded is doing here. This podcast has a number of other links to related material, including the Algorithmic Justice League. The ANU’s 3Ai, under Professor Genevieve Bell, is also seeking to ensure that we actively ask questions and design our cyber-physical future, rather than being passive supplicants of AI; and of course we should not worship the algorithms. They are useful tools but lousy masters!

In the particular AI use case of facial recognition, we do not feel that companies should be able to appropriate billions of individuals’ images and identities without specific informed consent, enmesh those identities in their products, license those products widely for private profit, and continue with business as usual. This would seem to violate the reasonable belief that individuals should be protected from unwanted commercial exploitation of their identities; in the US this is codified within the legal right of publicity, as argued in a November 2022 filing.

Technology Is Not Neutral: A Short Guide To Technology Ethics

Dr Stephanie Hare