Dylan McCreesh, Grade 12
In order to address these issues, we must evaluate their origins and their importance. As previously stated, it might be expected that an “objective” algorithm would eliminate bias. However, algorithmic bias exists, and it exists because algorithms are of human origin. Furthermore, the use of unrepresentative data in training an ADM System may result in the inheritance of pre-existing biases, as may technical limitations or bias on the part of the system’s coder [1, 4]. The inheritance of algorithmic bias in ADM Systems is highly multifaceted, which has created the need for the development of ethics-by-design ADM Systems. The impact of algorithmic bias is pressing: it breaches the principles of objective fairness by potentially introducing systematic discriminatory tendencies, including racial and gender-based discrimination, while also limiting individual experiences of fairness [1, 2, 3, 5, 6]. Adjusting for racial discrimination in ADM Systems has a two-pronged importance: first, it counters a racially unjust system, which inherently calls for action; second, it is in the interest of corporations to develop nondiscriminatory automated decision-making systems, as bias in their systems may unintentionally alienate consumers, as seen in allegations against Amazon of racist automated tendencies. Similar to racial discrimination in ADM processes, the sexist distribution of job advertisements reiterates the impact of algorithmic bias: women were disproportionately shown fewer high-paying job advertisements than men. Finally, in the global theater, it is essential that biases be eliminated as world powers begin to arm themselves with ADM Systems, especially “predictive policing” algorithms, which have been shown to disproportionately target African peoples with incorrect judgements [2, 4].
As ADM System automation continues to expand in scope, it is pivotal that algorithmic biases are prevented from influencing decision-making processes if ethical and just services are to be offered. Limiting bias-related discrimination should take precedence over the implementation of automation [1, 2, 3, 6]. Trustworthiness must be of the utmost priority in the general field of AI.
Even so, the question remains: how can algorithmic bias be remedied in ADM Systems? In recent research, much effort has been focused on post hoc bias correction via algorithmic awareness, algorithmic accountability, and algorithmic transparency. Such solutions, however, while helpful in correcting biased decisions, do not aid in preventing bias. One suggested preventive solution is the use of “insider” perspectives to eliminate bias during the phases of algorithmic development, data collection, and training. Additionally, it has been found that to limit discrimination on a given variable, race for example, accurate datasets containing racial information can be included when training the model. Other research asserts that alterations to an ADM program’s “choice architecture” (the manner in which ADM System decisions are displayed) can encourage a more user-engaged interface, helping to establish fairness across ranking, recommendation, and matching decisions. Finally, other research has investigated various approaches to achieving active fairness through modelling [1, 6]. One study stressed the importance of developing an algorithm modeled with specific “equal opportunity” priorities in order to achieve active fairness. These research efforts demonstrate the capacity of ADM Systems to be adjusted to an ethical standard. They reiterate that justice need not be sacrificed for automation: both can coexist with the help of efforts to limit, lessen, and eliminate algorithmic biases.
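To make the “equal opportunity” idea above concrete, the sketch below measures it as equal true positive rates across groups, i.e., qualified candidates in every group should be approved at the same rate. This is a minimal illustration only; the function names and the toy loan-decision data are hypothetical and are not drawn from any of the cited studies.

```python
# Minimal sketch: "equal opportunity" as parity of true positive rates
# (TPR) across demographic groups. All names and data are illustrative.

def true_positive_rate(y_true, y_pred):
    """Fraction of truly qualified individuals (y_true == 1)
    that the system actually approves (y_pred == 1)."""
    approvals = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(approvals) / len(approvals) if approvals else 0.0

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest difference in TPR between any two groups.
    A gap of 0 means qualified people in every group are
    approved at the same rate."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = true_positive_rate([y_true[i] for i in idx],
                                      [y_pred[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions: 1 = qualified / approved, 0 = not.
y_true = [1, 1, 1, 0, 1, 1, 1, 0]            # ground-truth qualification
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]            # system's decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = equal_opportunity_gap(y_true, y_pred, groups)
print(rates, gap)  # group A's TPR is 2/3, group B's only 1/3
```

Note that computing the per-group rates requires the group labels themselves, which echoes the point above that accurate demographic data can be necessary during training in order to detect and limit discrimination.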
As ADM development efforts forge onward, it is important to note that, despite sharing with humans a susceptibility to bias, technology principally differs in its lack of the human capacity to recognize its own biases. While humanity struggles to prevent the influence of bias, it is adept at recognizing its own biases and proactively limiting their impacts. Technology, by contrast, strikingly lacks the capacity to check or correct, in real time, the biases that arise from the data informing its decisions. Yet just as humans have introduced biases to automated systems, we must extend our capacity to limit bias throughout ADM System development. The goals of justice, which are furthered by research into bias elimination, must take priority over automation. Efficiency should not cost equality, nor does it have to.
1. Aysolmaz, Banu, Deniz Iren, and Nancy Dau. “Preventing Algorithmic Bias in the Development of Algorithmic Decision-Making Systems: A Delphi Study.” Proceedings of the 53rd Hawaii International Conference on System Sciences. 2020.
2. Bornstein, Aaron M. “Are Algorithms Building an Infrastructure of Racism?” Nautilus, December 21, 2017.
3. Chakraborty, Abhijnan, and Krishna P. Gummadi. “Fairness in Algorithmic Decision Making.” Proceedings of the 7th ACM IKDD CoDS and 25th COMAD. 2020. 367-368.
4. Kostopoulos, Lydia. “The Role of Data in Algorithmic Decision-Making.” UNIDIR. 2019.
5. Lahoti, Preethi, Krishna P. Gummadi, and Gerhard Weikum. “iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making.” 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE, 2019.
6. Noriega-Campero, Alejandro, et al. “Active Fairness in Algorithmic Decision Making.” Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 2019.