UNVEILING THE ALGORITHM – BIAS AND PREJUDICE IN THE GUISE OF TECHNOLOGICAL EFFICIENCY AND FAIRNESS

Sophisticated algorithms that take over simple as well as complex human activities are becoming increasingly common in all aspects of life. An algorithm does not get tired, distracted, bored, or sick; it has a better memory and nearly unlimited resources for compiling information. Scientists, programmers, and entrepreneurs have recognized the potential of this exponential technological advancement to make our lives easier, more efficient, and fairer, and have acted on that realization by developing ever more uses for it. No wonder machines are winning games such as Jeopardy, Go, and chess, and are being developed to take over tasks such as driving cars, operating airplanes, performing surgery, and carrying out legal work. That said, it is in tasks requiring a sense of justice or fairness that algorithms fail, including, inter alia, assessing defendants’ risk of recidivism for bail hearings or sentencing.

On the face of it, risk assessment of defendants at an early stage of the criminal process is crucial for an abundance of reasons, which can be categorized as serving either fairness or efficiency. The overcrowding of jails, as ProPublica points out, calls for a decrease in their occupancy; a swift risk assessment of defendants can justify releasing them, with or without bail, until the proceedings are completed, thus reducing the number of new detainees. Furthermore, an algorithm that can swiftly produce this assessment saves time and manpower and is cost effective. An accurate risk assessment, detached from human instincts, emotions, and biases, is allegedly objective; by predicting the chances of recidivism, it allows less crime-prone defendants to receive better treatment and a more moderate deprivation of their rights, thus better serving justice.

In reality, researchers point out that these widely used defendant risk assessment algorithms are biased, and more specifically racist, due to their underlying methodology and objectives. While these algorithms are accurate in their recidivism predictions in 60 percent of cases, they also miscategorize black defendants as high risk twice as often as white defendants. In other words, within the 40 percent of cases where the algorithm’s risk assessment is wrong, it is twice as likely to mark black defendants as high risk as it is to mark white defendants as such. The opposite also holds: the algorithm is twice as likely to mark white defendants as low risk as it is black defendants. It should be emphasized that these conclusions are drawn from applying the same algorithm, containing the same questions, to all subjects. Thus, while neither the programmers nor the algorithm itself seems prejudiced or biased, the algorithm’s output is, as the sketch below illustrates.
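The disparity can be made concrete with a minimal sketch. The records below are entirely made up for illustration (they are not ProPublica’s data); the point is only to show how such a gap is measured: for each group, the predicted risk label is compared against whether the person actually reoffended, and a false positive rate (non-reoffenders flagged as high risk) and a false negative rate (reoffenders flagged as low risk) are computed per group.

```python
# Illustrative sketch with hypothetical data -- not the actual COMPAS/ProPublica
# dataset. It only shows how error-rate disparities between groups are measured.

def error_rates(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs."""
    false_pos = sum(1 for pred, actual in records if pred and not actual)
    false_neg = sum(1 for pred, actual in records if not pred and actual)
    non_reoffenders = sum(1 for _, actual in records if not actual)
    reoffenders = sum(1 for _, actual in records if actual)
    return false_pos / non_reoffenders, false_neg / reoffenders

# Made-up records per group, purely for illustration.
by_group = {
    "black defendants": [(True, False), (True, True), (False, False), (False, True),
                         (True, True), (False, False), (True, False), (False, False)],
    "white defendants": [(False, False), (True, True), (False, True), (False, False),
                         (False, True), (True, False), (False, False), (False, False)],
}

for group, records in by_group.items():
    fpr, fnr = error_rates(records)
    print(f"{group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

With these toy numbers, non-reoffending black defendants are flagged as high risk twice as often as non-reoffending white defendants (40 percent versus 20 percent), while reoffending white defendants are marked low risk twice as often (67 percent versus 33 percent), even though the same rule was applied to everyone. The real figures differ, but the measurement is of the same kind.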

In studying the algorithm, researchers came to a dichotomous conclusion: an algorithm can be written either to predict recidivism equally well for all races, or to distribute its errors equally across races, but not both. This conclusion is consistent with other studies finding that artificial intelligence, a sophisticated algorithm meant to imitate the human mind and reason by constantly learning from real-life interactions and from limited or unlimited databases such as the internet, is racist, sexist, and hostile towards women, making algorithms wolves in the guise of sheep.
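One way to see why this dichotomy is unavoidable, rather than a quirk of any particular product, is a small numerical sketch with assumed figures. Suppose a risk score is perfectly “fair” in the predictive sense, i.e., calibrated: a score of 0.7 means a 70 percent chance of reoffending in either group, and a score of 0.3 means a 30 percent chance. If the two groups differ in how many of their members receive each score (that is, in their base rates), then labeling everyone above a threshold “high risk” necessarily produces different false positive rates.

```python
# Hypothetical illustration of the calibration vs. equal-error-rate tension.
# Assumption: the score is perfectly calibrated in both groups (a 0.7 score means
# a 70% chance of reoffending, a 0.3 score a 30% chance); only the distribution
# of scores -- and hence the base rate -- differs between the groups.

score_shares = {
    "group A": {0.7: 0.5, 0.3: 0.5},   # base rate 0.5*0.7 + 0.5*0.3 = 50%
    "group B": {0.7: 0.2, 0.3: 0.8},   # base rate 0.2*0.7 + 0.8*0.3 = 38%
}
threshold = 0.5  # scores at or above this are labeled "high risk"

for group, shares in score_shares.items():
    # Share of the group that does NOT reoffend but is still flagged high risk...
    flagged_non_reoffenders = sum(share * (1 - s) for s, share in shares.items()
                                  if s >= threshold)
    # ...divided by the share of the group that does not reoffend at all.
    non_reoffenders = sum(share * (1 - s) for s, share in shares.items())
    print(f"{group}: false positive rate {flagged_non_reoffenders / non_reoffenders:.0%}")

# Prints roughly 30% for group A and 10% for group B: the identically calibrated
# score burdens non-reoffenders in the higher-base-rate group far more heavily.
```

Equalizing the error rates instead would require different thresholds or differently calibrated scores for the two groups, which is the tension the researchers describe.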

Speaking of wolves, it should be recognized that our preconceived notions of people are flawed and ingrained with prejudice, and that by detaching instincts and emotions we de facto eternalize those notions into the future in the form of “fair and objective” algorithms. In fact, according to Richard Werner, it is a common misunderstanding that our conscious brain, the one in charge of reasoning, is solely responsible for decision making.[1] Rather, it is our unconscious brain, our instincts and emotions, that surfaces the decision in the conscious brain, and the latter tames the former and rationalizes its conclusions into workable language. Our instincts and emotions are intrinsic to our decision-making process: they are unreasoned, fluid, and changing; they can be prejudiced as well as just; essentially, they serve as a double-edged sword, for better or worse. Algorithms rely solely on the rationalization of past instincts and prejudices; they lack flexibility, the wisdom individuals derive from past experience, and the ability to strike a balance between old and new values. As such, they cannot substitute for humans in making just decisions.

Sophisticated algorithms, detached from human prejudice and instincts, seem appealing for the sake of justice and fairness, but only if one believes the fallacy that they are objective and capable of being moral and just agents. As long as humans design algorithms, the latter will be endowed with prejudice and will fix it in place. Unlike humans, algorithms cannot tell whether a logical conclusion based on preprogrammed equations is racist, sexist, or biased in any other form. Until they are able to do so, I argue that any use of algorithms for normative decisions should be scrutinized, meticulously validated, and still construed as intrinsically biased and suspect, and should not serve as a decisive factor in making those decisions.

[1] Richard Werner, Just War Theory: Going to War and Collective Self-Deception, in Routledge Handbook of Ethics and War: Just War Theory in the Twenty-First Century 35, 36 (Fritz Allhoff et al. eds., 2013).
