Mitigating Automated Discrimination

Dylan McCreesh, Grade 12

Humans are biased creatures. That’s a simple, historically irrefutable, unavoidable element of our nature. In making decisions, humankind is notoriously hindered by a variety of innate biases. Hard-baked into our psychology are heuristic shortcuts, self-preference biases, and situationally homophilous or heterophilous tendencies, all of which alter our judgement and limit our capacity for unbiased decision-making. Moreover, for humans, decision-making is a time- and energy-consuming, inefficient process. These limitations have inspired the development of Algorithmic Decision-Making (ADM) processes – or, more concisely, technologically automated decision-making [1, 2, 4]. One might believe that these processes would be exempt from the biases that plague human decisions; after all, data is meant to be objective, factual, and unbiased. However, this belief is far from the truth, as human biases can permeate human-created technologies. Despite the calculated nature of ADM processes, biases persist because humans create and train the programs, and the datasets that inform an ADM process may misrepresent or underrepresent their subjects due to human bias in data collection [1, 2, 3, 6].

This presents a serious issue, as decisions have increasingly become automated across legal, military, medical, economic, media, and commercial spheres, as well as many other aspects of common life [1, 3, 4]. ADM biases may lead to the unfair policing and sentencing of Black Americans as legal systems increasingly seek efficiency and automation, or to unequal banking decisions for people of different backgrounds but similar creditworthiness [1, 2]. As advances in ADM technologies are made, there must be a conscious and active effort to mitigate the influence of biases in these programs. The emerging issue is that the mitigation process is far from clear. However, strategies such as implementing clearer ethics-in-automation guidelines and increasing bias awareness must be integrated to deter the onset of algorithmic bias.

To address these issues, we must examine their origins and weigh their importance. As previously stated, one might expect an “objective” algorithm to eliminate bias. Yet algorithmic bias exists, precisely because algorithms are human creations [1]. Training an ADM system on unrepresentative data can cause it to inherit pre-existing biases, as can technical limitations or the biases of the system’s developers [1, 4]. Because algorithmic bias enters ADM systems through so many channels, researchers have called for the development of ethics-by-design ADM systems [1]. The impact of algorithmic bias is pressing: it breaches the principles of objective fairness by potentially introducing systematic discriminatory tendencies, including racial and gender-based discrimination, while also diminishing individual experiences of fairness [1, 2, 3, 5, 6]. Correcting for racial discrimination in ADM systems has a two-pronged importance: first, it counters a racially unjust system, which inherently calls for action; second, it is in corporations’ own interest to develop nondiscriminatory automated decision-making systems, as bias may unintentionally alienate consumers, as seen in allegations against Amazon for racially discriminatory automated tendencies [2]. Similarly, the sexist distribution of job advertisements reiterates the impact of algorithmic bias: women were disproportionately shown fewer high-paying job advertisements than men [1]. Finally, in the global theater, it is essential that biases be eliminated as world powers begin to arm themselves with ADM systems, especially “predictive policing” algorithms, which have been shown to disproportionately subject people of African descent to incorrect judgements [2, 4].
As ADM system automation continues to expand in scope, it is critical that algorithmic biases be prevented from influencing decision-making processes if ethical and just services are to be offered. Limiting bias-related discrimination should take precedence over the implementation of automation [1, 2, 3, 6]; trustworthiness must be the utmost priority in the field of AI.

Even so, the question remains: how can algorithmic bias be remedied in ADM systems? Much recent research has focused on after-the-fact correction through algorithmic awareness, algorithmic accountability, and algorithmic transparency [1]. However, such solutions, while helpful in correcting biased decisions, do little to prevent bias in the first place [1]. One proposed preventive approach is the use of “insider”-perspective bias elimination during the phases of algorithm development, data collection, and training [1]. Additionally, it has been found that discrimination on a given variable, such as race, can be limited by including accurate data on that variable when the model is trained [1]. Other research asserts that altering an ADM program’s “choice architecture” (the manner in which its decisions are displayed) can encourage a more user-engaged interface, helping establish fairness across ranking, recommendation, and matching decisions [3]. Finally, researchers have investigated various possibilities for achieving active fairness through modelling [1, 6]; one study stressed the importance of developing an algorithm modeled with explicit “equal opportunity” priorities [6]. These research efforts demonstrate the capacity for ADM systems to be adjusted to an ethical standard, reiterating that justice need not be sacrificed for automation: both can coexist with the help of efforts to limit, lessen, and eliminate algorithmic biases.
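The “equal opportunity” criterion mentioned above can be made concrete: a classifier satisfies it when qualified members of every group receive favorable decisions at the same rate. The following is a minimal sketch of such a check, not the method of any cited study; the loan data, group labels, and function names are hypothetical and invented for illustration.

```python
# Sketch of an "equal opportunity" audit: compare a model's
# true-positive rate (TPR) across demographic groups.
# All data below is hypothetical and for illustration only.

def true_positive_rate(y_true, y_pred, group, target_group):
    """TPR among members of target_group who truly qualify (label 1)."""
    hits = total = 0
    for label, pred, g in zip(y_true, y_pred, group):
        if g == target_group and label == 1:
            total += 1
            if pred == 1:
                hits += 1
    return hits / total if total else 0.0

def equal_opportunity_gap(y_true, y_pred, group, group_a, group_b):
    """Absolute TPR difference between two groups; 0 means parity."""
    return abs(true_positive_rate(y_true, y_pred, group, group_a)
               - true_positive_rate(y_true, y_pred, group, group_b))

# Hypothetical loan decisions: label 1 = truly creditworthy, pred 1 = approved.
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 1]
group  = ["A", "A", "A", "B", "B", "B", "A", "B"]

gap = equal_opportunity_gap(y_true, y_pred, group, "A", "B")
# Here every creditworthy "A" applicant is approved, but only one of
# three creditworthy "B" applicants is, so the gap is large (2/3).
```

A large gap signals that equally qualified applicants are treated differently depending on their group, which is exactly the kind of disparity a post-hoc audit can surface but only careful training can prevent.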

As ADM development efforts forge onward, it is important to note that despite sharing a susceptibility to bias with humans, technology differs in a crucial respect: it lacks the human capacity to recognize its own biases. While humanity struggles to prevent the influence of bias, we are adept at recognizing our own biases and proactively limiting their impacts. Technology, on the other hand, strikingly lacks the capacity to check or correct, in real time, the biases that arise from the data informing its decisions. Just as humans have introduced biases into automated systems, we must extend our capacity for limiting bias throughout ADM system development. The goals of justice, which are furthered by research into bias elimination, must take priority over automation. Efficiency should not cost equality, nor does it have to.


1. Aysolmaz, Banu, Deniz Iren, and Nancy Dau. “Preventing Algorithmic Bias in the
Development of Algorithmic Decision-Making Systems: A Delphi Study.” Proceedings of the 53rd Hawaii International Conference on System Sciences. 2020.
2. Bornstein, Aaron M. “Are Algorithms Building an Infrastructure of Racism?” Nautilus,
December 21, 2017.
3. Chakraborty, Abhijnan, and Krishna P. Gummadi. “Fairness in Algorithmic Decision
Making.” Proceedings of the 7th ACM IKDD CoDS and 25th COMAD. 2020. 367-368.
4. Kostopoulos, Lydia. “The Role of Data in Algorithmic Decision-Making.” UNIDIR. 2019.
5. Lahoti, Preethi, Krishna P. Gummadi, and Gerhard Weikum. “iFair: Learning Individually
Fair Data Representations for Algorithmic Decision Making.” 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE, 2019.
6. Noriega-Campero, Alejandro, et al. “Active fairness in algorithmic decision making.”
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 2019.
