By Jaden Yocom
ShotSpotter, a tech startup based in Newark, California, held its IPO in 2017 and has since watched its market cap climb from $183 million to nearly $500 million. The company’s product? Software embedded in public infrastructure to detect gunshots rapidly and accurately and to notify officials immediately. According to the company’s leaders, this software is destined to change public law enforcement for the better by improving response times and reducing gun violence. ShotSpotter CEO Ralph Clark said that “the first and foremost thing that we tell police departments is that our system is going to make them aware of a lot more gunfire than they’ve ever been aware of before.” Clark, now worth around $30 million himself, is not the only executive who claims to possess technology that will improve some aspect of law enforcement and policing. And ShotSpotter is not the only company that stands to experience an astronomical influx of funds on the strength of its product’s potential.
Over the past century, and the past 20 years especially, America has increasingly borne witness to the intersection of law enforcement and technology. With such heavy interaction between these fields, it becomes important to ask not only the proper scientific and legal questions but also the proper philosophical ones. For example, should ShotSpotter’s software, which depends on city-wide implementation in order to successfully triangulate any source of gunfire, be considered overly powerful? Perhaps today it can only detect gunshots, but imagine if it could listen in on all human conversations. Imagine, further, that it could understand those conversations and, if it deemed it necessary, report to law enforcement. Should such software be challenged on privacy grounds and similar objections? Or should we adopt a utilitarian view and argue that if an AI program can understand conversation related to the commission of crimes, we should allow it to become an “officer” in its own unique way, for the greater good of the people? The ethical considerations that must be made as these areas continue to merge at high speed are overwhelming. One can imagine the software eventually being improved and paired with hardware to the point where it has a body and can think, feel, and move on its own. What then? What happens when we reach “RoboCop”?
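ShotSpotter has not published its algorithms, but the city-wide triangulation described above is commonly done with time-difference-of-arrival (TDOA) multilateration. The sketch below is hypothetical: invented sensor positions and a brute-force grid search stand in for whatever solver a real system uses. It shows the core idea, that a shot’s position can be recovered purely from differences in when the bang reaches each sensor.

```python
import itertools

SOUND_SPEED = 343.0  # m/s in air at roughly 20 °C

# Hypothetical acoustic sensor positions (meters) across a small district
SENSORS = [(0.0, 0.0), (800.0, 0.0), (0.0, 600.0), (800.0, 600.0)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def arrival_times(source):
    """Time for the sound to reach each sensor from the source."""
    return [dist(source, s) / SOUND_SPEED for s in SENSORS]

def locate(times, step=10.0):
    """Grid search for the point whose pairwise arrival-time
    differences best match the observed differences."""
    pairs = list(itertools.combinations(range(len(SENSORS)), 2))
    observed = [times[j] - times[i] for i, j in pairs]
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 800.0:
        y = 0.0
        while y <= 600.0:
            t = arrival_times((x, y))
            predicted = [t[j] - t[i] for i, j in pairs]
            err = sum((o - p) ** 2 for o, p in zip(observed, predicted))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

shot = (350.0, 210.0)
print(locate(arrival_times(shot)))  # → (350.0, 210.0): the shot sits on the search grid
```

With only two sensors, a time difference constrains the source to a hyperbola rather than a point, which is why dense, city-wide sensor coverage matters so much to this approach.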
As the rate of technological innovation continues to increase exponentially, these questions must not be abandoned in the rearview mirror. AI programs are being developed, improved, and implemented all in the same breath, and few companies are taking the time to consider the potential consequences of such expansion. Here, I will seek to correct that, on a small scale, by providing a philosophical analysis of the use of artificial intelligence in law enforcement.
Artificial intelligence already exists in law enforcement and the legal system today. Software dedicated to identifying locations with a high probability of crime, reporting crimes accurately, assisting in surveillance, and even judging a criminal’s eligibility for parole is all in use today. I believe there are three fundamentally important dimensions to the discussion of AI in law enforcement. There are certainly many other ethical and philosophical questions to be explored, but here I will address the central three. First, biases arising in automated decision-making software have long been a problem. Second, the question of the importance of emotional AI is especially relevant at the intersection with law enforcement. Third, predictive policing may prove immoral if it is discriminatory. I will conclude by proposing the creation of a technology-ethics position in all active police departments, as well as in companies working on innovations for law enforcement. The interplay between technology and law enforcement is simply too great to allow it to continue without critically examining the moral factors at play.
BIAS IN ARTIFICIAL INTELLIGENCE PROGRAMS
The problem of bias in artificial intelligence programs is well documented – if the humans who write the code for these programs are prone to biases that they most often cannot sense or control, why would their software not be? The challenge presented by bias has overwhelmed several attempts at artificial intelligence. In 2016, for example, Microsoft’s attempt at an artificially intelligent Twitter chatbot rapidly crashed and burned – the billion-dollar company had to pull the bot, ‘Tay’, from the site after it began posting racist, sexist, and insensitive comments. When it comes to whether artificial intelligence applications should be designed for use in law enforcement, the question of bias control becomes even more important. America already experiences a great deal of discrimination and bias from law enforcement. If programs that are supposed to deter or address crime are found to possess similar biases, it certainly seems unethical to continue their use.
Imagine that RoboCop-like technology is developed: hardware that runs artificial intelligence software and is able to act as a rational agent. Now imagine that the AI software contains hidden biases similar to those already common in America. The program would continue to gather data and learn from the world around it, a world that contains ample bias. The program now faces two issues: first, its internal bias gives it a distorted perspective of the world around it; second, it actively reaffirms that bias because it encounters external bias as it observes the world. Deploying this invention, or similar technology, without first solving these multidimensional bias problems would result in discriminatory law enforcement practice. America certainly does not need any more of that.
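The first half of that problem – a model inheriting bias from the records it learns from – can be shown with a deliberately simple sketch. Everything here is hypothetical: two neighborhoods with identical true offense rates, but skewed historical patrol coverage, so one neighborhood’s offenses were recorded far more often. A naive risk model trained on those records faithfully reproduces the skew.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical ground truth: both neighborhoods offend at the same rate.
TRUE_RATE = {"north": 0.05, "south": 0.05}
# But historical patrol coverage was skewed, so offenses in the
# over-patrolled neighborhood were far more likely to be recorded.
DETECTION = {"north": 0.9, "south": 0.3}

def historical_records(n=100_000):
    """Generate n person-days of (neighborhood, offense-recorded) data."""
    records = []
    for _ in range(n):
        hood = random.choice(["north", "south"])
        offended = random.random() < TRUE_RATE[hood]
        recorded = offended and random.random() < DETECTION[hood]
        records.append((hood, recorded))
    return records

def learned_risk(records):
    """A naive 'model': recorded-offense frequency per neighborhood."""
    counts, totals = {}, {}
    for hood, recorded in records:
        totals[hood] = totals.get(hood, 0) + 1
        counts[hood] = counts.get(hood, 0) + recorded
    return {h: counts[h] / totals[h] for h in totals}

risk = learned_risk(historical_records())
# 'north' scores roughly three times riskier than 'south', even though
# the underlying offense rates are identical by construction.
print(risk)
```

The model is not malfunctioning; it is accurately summarizing biased data, which is exactly why such bias is so hard to sense or control from inside the system.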
EMOTIONAL ARTIFICIAL INTELLIGENCE
The second important ethical discussion to be had here regards the development of emotional AI. Many industry professionals and thought leaders have stressed the importance of creating artificial intelligence software that can interpret or experience emotion. For example, Rana El Kaliouby founded the company Affectiva with the self-proclaimed mission of humanizing technology by developing emotion AI. Several companies have since followed in Affectiva’s footsteps as it has grown from a start-up into a multimillion-dollar company.
When it comes to using AI programs in law enforcement, there is an important question to ask: should these programs be required to possess “EQ,” emotional intelligence? The answer may vary with the specific application, but there are certainly convincing arguments for answering in the affirmative. Imagine the following case: an AI program is deployed in areas with high crime rates and tasked with calling the police when crime occurs. One day, it prepares a police call based on data consisting of children screaming, gunshots, and a large group of people. In reality, the people are watching fireworks – the program misinterprets both the loud noises and the screams because it cannot fully understand context. As a result, the people in this area are repeatedly subjected to random police visits over faulty reports.
This seems to be a clear-cut instance of emotional intelligence being highly relevant to artificial intelligence programs. And because hundreds of other applications for AI in law enforcement exist, this question must be asked repeatedly. Perhaps it should even become standard industry practice to build artificial intelligence systems with emotional intelligence. There certainly seem to be several benefits to doing so.
PREDICTIVE POLICING
The third primary ethical concern regarding artificial intelligence in law enforcement is whether predictive policing – a practice in which data and prognostic methods are used to try to deter crime before it happens – is morally justifiable. Andrew Ferguson, a law professor and big-data expert, has raised these ethical questions before: “there’s a real danger, with any kind of data-driven policing, to forget that there are human beings on both sides of the equation.” Many opponents of predictive policing believe that introducing such a system will lead to discriminatory over-policing, which generates further data that ‘justifies’ such policing because crime rates in the targeted areas remain high. Several activists, industry leaders, and academics have expressed concern over this method of doing police work. Moish Kutnowski notes that there are significant “dangers and flaws of predictive policing as a discretionary tool used to justify questionable processes and biases.”
On the other hand, proponents of predictive policing are confident that it can be used in ways that benefit at-risk communities. If crime is policed more accurately and more efficiently, the logic goes, high crime rates will decline over time, given the proper system and implementation. That success, however, has yet to be demonstrated – the Los Angeles Police Department terminated its predictive policing initiative in 2019 amid public backlash and a lack of sufficient oversight of the project. Still, many companies continue to explore predictive policing strategies, and multiple cities across the U.S. use some form of it.
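The feedback loop that critics describe can be made concrete with a deliberately minimal simulation. All figures here are invented: two districts with identical true crime rates, a dispatch rule that sends every patrol to whichever district has the higher historical count, and the simplifying assumption that only patrolled crime gets recorded.

```python
# Two districts with IDENTICAL true crime rates; all figures invented.
TRUE_CRIMES_PER_DAY = 10

def simulate(days, recorded):
    """Each day, send every patrol to the district with the higher
    historical count; only patrolled crime gets recorded."""
    recorded = list(recorded)
    for _ in range(days):
        hotspot = 0 if recorded[0] >= recorded[1] else 1
        # Crimes in the unpatrolled district go entirely unrecorded.
        recorded[hotspot] += TRUE_CRIMES_PER_DAY
    return recorded

# A tiny skew in the seed data (6 vs. 4 recorded incidents) compounds
# into an enormous disparity that the data itself appears to 'justify'.
print(simulate(days=100, recorded=(6, 4)))  # → [1006, 4]
```

Real deployments split patrols rather than sending all of them to one district, but the direction of the effect is the same: the skewed record, not the underlying crime, ends up steering where police go next.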
A PROPOSAL: ETHICS OFFICERS
An initial step must be taken to ensure the ethical use of all technology available to law enforcement. I propose that all police departments, law enforcement agencies, and private companies building technology for these organizations integrate ethics into their practice. This can be done by creating one or more ethics-officer positions, or ethics committees, in which experienced experts are tasked with guiding the use of these new and advanced technologies. Ethics officers would help determine the correct decisions in cases where the moral lines are blurred – a description that fits many instances of AI use in law enforcement.
There are five foundational arguments for creating such a position within these organizations:
- Difficult ethical decisions would no longer be handled by individuals without the proper training – engineering degrees and experience in the law enforcement field are certainly valuable, but complex ethical decisions require a different skill set and experience.
- Internal organizational power would be better balanced – by giving ethics officers certain powers within an organization, problems like corruption and toxic workplace culture may occur far less frequently.
- There is no conflict of interest between the individual making ethical decisions and their organization – whereas private organizations and public law enforcement agencies have incentives to arrest as many individuals as possible, an ethicist does not share those incentives.
- The position would yield a great deal of valuable information and data for future analysis – by recording decisions (and the justifications for them), private companies and law enforcement agencies would produce large amounts of data that can be analyzed to further improve these institutions.
- Ethics deserves far greater consideration than it is given today – as Plato once remarked, “There will be no end to the troubles of states, or of humanity itself, till philosophers become kings in this world, or till those we now call kings and rulers really and truly become philosophers, and political power and philosophy thus come into the same hands.” When there is little dedication to pursuing morality and justice, nobody should expect it to occur. However, with an increase in ethical decision making, law enforcement could rapidly change for the better.
In conclusion, the proper way to address the ethical dilemmas generated by the use of artificial intelligence in law enforcement is to do so head-on, by employing the skills of experienced ethicists and moral philosophers. The three concerns I have addressed here – bias within AI, EQ within AI, and the justification behind predictive policing – are highly relevant as the use of this technology in the field of law enforcement continues to increase.
Nearly every institution in modern America is rapidly intersecting with technological innovation, from education to healthcare to law enforcement. The only way to pursue the best outcome for all parties is to ensure that the practices being adopted have been subjected to extensive critical analysis and revision. Without ethicists to guide these emerging technologies as they shape our lives, human beings will surely encounter ever more pressing ethical issues.