The ethics of artificial intelligence

Researcher profile: Nicholas Novelli

Photo provided by Nicholas Novelli.

When you think of artificial intelligence (AI) you may picture an android like Data on Star Trek, the artificial general intelligence of The Terminator’s Skynet, or any of the many incarnations of AI that have been embraced by popular culture.

As our technology and our understanding of human neural mechanisms continue to grow, so will the prevalence of AI in our lives. We have AIs competing on Jeopardy!, playing chess, and even answering questions for us on our smartphones. So what happens if we develop an AI that seems human enough to be treated as such?

This is where the scholarship of recent University of Manitoba philosophy graduate Nicholas Novelli comes in. For the last few years, he has been researching the “ethics of artificial intelligence – what it would take for machines to have moral standing.”

In his provocatively titled MA thesis, “Adventures in space racism: going beyond the Turing Test to determine AI moral standing,” which draws attention to potential parallels between the treatment of AI and racial discrimination, Novelli set out to test how our current moral theories would deal with the advent of an AI able to pass for human.

Novelli’s thesis was conducted within a tradition of philosophical inquiry called analytic philosophy, which he explained “works on the basis of logical arguments: taking certain claims and showing what follows logically from them.”

More specifically, he investigated a number of moral theories and came up with “thought experiments to test them – imaginary situations that might have very different implications than everyday cases.”

Novelli gave a snapshot of what this process looks like.

“Some moral theories might seem very plausible and intuitive on first impression, but counterexamples can show that if you accept them, you would have to accept unpalatable conclusions.”

“For example, some have proposed that individuals have moral standing in virtue of having a sufficiently complex network of dispositions,” he said. “So a sufficiently advanced computer could have moral standing as well.”

“But if we accept that as the criterion for moral standing, we could imagine an otherwise normal but extremely unambitious human being who has very few dispositions – maybe the same number as a Roomba. We would then have to say that either robot vacuums have moral standing, or this human doesn’t – both obviously unacceptable. The process is to test theories until we get one that doesn’t seem to have any implications we can’t accept.”

From his new home in Edinburgh, Scotland, where he has begun his Ph.D. work at the University of Edinburgh, Novelli also spoke of some of the limitations of this type of research.

At the outset, it’s necessary to make a number of background assumptions to narrow the scope of the arguments being presented, Novelli said. If an individual did not agree with a background assumption – say, that animals such as fish should not be considered in moral decision-making – then the arguments that follow from that assumption would not apply to them.

With these limitations in mind, Novelli chose to continue his research at the University of Edinburgh, home to the oldest centre for AI research in the UK, where he plans to expand the scope and applicability of his work.

Novelli said that there are already centres dedicated to exploring these and related issues, such as the Centre for the Study of Existential Risk at the University of Cambridge and the University of Oxford’s Future of Humanity Institute. He anticipates the uncharted philosophical and moral questions that artificially intelligent machines will raise, and plans to be at the forefront of these discussions.