By: Jaden Yocom
Billionaire James Goodnight has recently steered his software analytics company, SAS, toward a one-billion-dollar investment in artificial intelligence development. Goodnight’s enthusiasm for the investment is unmistakable, as he claims that by using artificial intelligence “SAS helps businesses deter damaging fraud, fight deadly disease, better manage risk, provide exemplary service to customers and citizens, and much more.” Unfortunately, not everybody can share in the CEO’s enthusiasm. I, for example, doubt the truth of his sentiments. Beyond that, I believe he is noting only the intended positive results of developing artificial intelligence and missing the potential negative consequences entirely. One substantial concern over artificial intelligence involves the ethical implications of such technology. There are three prominent ethical considerations regarding artificial intelligence. First, there is the question of whether the act of producing an intelligent entity should be pursued by humans at all. Second, one must ask what moral obligations, if any, we have to the intelligence we create. Third, there is a distinctly utilitarian concern about investing in the creation of new intelligence while much of the human life on Earth suffers and receives less consideration. These ethical concerns showcase the importance of treating AI with some degree of moral consideration.
The first ethical concern asks whether humans have any right to create intelligence at all. There are many reasons one might argue that humans have no such right. Many religions would surely object on some grounds – and they have. Pope John Paul II, for example, argued that humans can never create anything that has a soul; only God can do so. Would it then be unjust to create intelligence with no soul? Presumably some individuals would think so. Another possible reason lies in the realm of failure. We can imagine a plethora of ways humans might fail at developing artificial intelligence. Creating a machine that feels only suffering and no other emotion, for example, is one possibility. Gifting a machine with intelligence temporarily and then taking it away (whether purposely or accidentally) is another. These potential instances and others highlight the vast array of ways in which the development of artificial intelligence could be immoral.
The second important ethical question is whether humans have any moral obligation to artificial intelligence, and if so, to what degree. In the future, there will likely be camps of people who believe we have no moral obligation to artificial intelligence. Perhaps they will argue that no matter how intelligent a machine becomes, we, as its creators, should retain full control over it. Conversely, there will certainly be individuals who claim that we do have a moral obligation to the intelligence we create, and I expect there would be different degrees to which people feel obligated. Some may believe we should simply ‘do no harm’ to the intelligence we create. Others may argue for substantially deeper obligations, such as full protections under the label of ‘personhood’. It is essential that we engage in this critical conversation about our potential obligations, and it is already clear that artificial intelligence deserves some degree of respect.
The final ethical concern stems from a utilitarian perspective on morality. The utilitarian believes that an action is morally just if it maximizes collective happiness; utilitarianism is sometimes referred to as the “greater good” theory. To see how this relates to artificial intelligence, one need only ask a simple question: is expending resources on the development of artificial intelligence the optimal strategy for maximizing the world’s collective happiness? To put this another way: are there alternative areas where these resources could be allocated that would yield more happiness? These questions demand an answer, and a piece of that answer will lie in giving moral consideration to artificial intelligence.
In conclusion, artificial intelligence carries both benefits and risks, especially where morality is concerned. Artificial intelligence is already being researched and developed, meaning the question of whether humans have a right to construct it has effectively been ignored. But the more salient question remains: does artificial intelligence deserve moral consideration? The answer is yes. The three ethical concerns noted here showcase the importance of creating some sort of ethical procedure to follow while constructing and working with artificial intelligence.