Correcting for Bias in Automated Decision Making: How to Better Solve for Disparities in Algorithmic Outputs

Arnav Hak, Grade 10

Introduction
          In today’s digitalized world, people constantly encounter automated decisions made by ADM (Automated Decision-Making) systems. ADM systems are technical systems that aim to aid or replace human decision-making in society by deriving conclusions from given datasets (1). These systems are becoming increasingly common within local, state, and federal agencies, yet they remain relatively unfamiliar to the general public. While ADM systems show great promise, they pose numerous implementation issues that must be dealt with (2). Just like humans, algorithms can contain bias, leading to a wide range of problems. Developers must find an approach that allows them to address bias before it occurs and harms the individuals affected by it (3). This paper discusses what exactly algorithmic bias is, how it can enter a system, and the best ways to mitigate it. The solutions presented here can lead us on a path to fair and reliable use of ADM systems.

Sources of Algorithmic Bias
          Algorithmic bias occurs when the datasets used do not include the variables needed to properly reflect the scenario being predicted. An algorithm’s performance is a direct byproduct of the datasets used to train it. In principle, the use of algorithms should lead to fairer decision-making, since algorithms are impartial and not inherently biased (4). However, it has been shown in several real-world cases that ADM systems discriminate along the same lines as biases common within our own society. Algorithmic bias leads ADM systems not to act with objective fairness but to discriminate unfairly because of the nature of their datasets, which completely undermines the goal of such systems. Bias can enter an ADM system at any step of its development. The two main entry points are the omission of data, which leaves datasets unrepresentative, and the collection and selection of training data; in both cases, human error can be a factor. An example of algorithmic bias is the COMPAS recidivism algorithm, which is used to assess the probability of a convicted criminal committing a crime again. After extensive study of the datasets used by the algorithm, it was found that white criminals were underrepresented, making it appear as though a predominantly black neighborhood had higher crime and recidivism rates than the surrounding areas (5). Another example is that on Google, women are less likely than men to be shown ads for high-paying jobs, with gender inferred from their search history and profile information, itself another instance of algorithmic bias. Automated decisions like these have the power to cause serious damage to the individuals affected by them (6).
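          To make the underrepresentation problem concrete, below is a minimal sketch in Python, using synthetic data and hypothetical variable names rather than any real dataset such as COMPAS. Two groups follow different feature-to-outcome relationships, but because one group dominates the training data, the model only learns that group’s pattern, and its accuracy collapses for the other group.

```python
# A minimal illustration (synthetic data, hypothetical names) of how an
# unrepresentative training set harms the underrepresented group: the model
# only learns the pattern of the group that dominates the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weight):
    """One synthetic group whose label follows a logistic relationship with one feature."""
    x = rng.normal(size=(n, 1))
    p = 1 / (1 + np.exp(-weight * x[:, 0]))  # probability of a positive label
    y = (rng.random(n) < p).astype(int)
    return x, y

# Group A (weight +2.0) dominates training; group B (weight -2.0) is barely present.
xa, ya = make_group(5000, 2.0)
xb, yb = make_group(100, -2.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# On balanced test sets, the model scores well for A but poorly for B,
# because it only ever learned A's pattern.
for name, (x, y) in [("A", make_group(2000, 2.0)), ("B", make_group(2000, -2.0))]:
    print(f"group {name}: accuracy = {model.score(x, y):.2f}")
```

          The remedy hinted at here is simple in spirit, if not in practice: collect enough representative data from every group the system will be asked to judge.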

Algorithmic Awareness
          Understanding the issue at hand is always the first step to solving any problem. Without a general grasp of what algorithms do, people will take automated decisions at face value instead of learning from and questioning what is shown (7). With an understanding of automated decision-making, there will be fewer harmful repercussions and a greater ability to confront these issues. Because of how dependent people are on technology, many have already stopped questioning whatever appears on their screens, whether ads, recommendations, or navigation routes. To build algorithmic awareness, we must start questioning the decisions being laid in front of us. Only then will the public be able to prevent algorithmic bias.
          To spread awareness effectively, we must build a wider understanding of the role algorithms play in our society and our daily lives. With that wider understanding, people will be able to raise awareness in public debates on the issues emerging from algorithm use (8) and propose solutions ranging from policy changes to limits on how the technology is used, all aimed at combating algorithmic bias (9).

Algorithmic Transparency
          It’s no secret that we rely on technology to go about our daily lives. Because of that, it’s important to have transparency into how the systems we use are developed, and to let people outside the development team help improve them. As everyday decisions become more automated and processed by algorithms, the processes behind them become less accountable. These automated decisions carry risks of secret profiling and discrimination, and they can undermine the public’s right to privacy. Algorithmic transparency therefore plays a key role in defending human rights.
          There are several ways to achieve algorithmic transparency, one being to make algorithms publicly accessible. With algorithms publicly accessible, external parties could feed in their own datasets, examine the results, and report cases where the model behaves unfairly, providing developers with a form of peer review. Even without sharing the internal workings of an algorithm, people would still be able to detect bias (10). To protect intellectual property, certain algorithms could ship with blank datasets and let users input their own. Another option is to make datasets available to the public. Doing so would reduce the underrepresentation of data in certain models and, in turn, allow other models to train on the same datasets, making them more reliable as well. Hundreds of public datasets already exist (11); however, they are clearly not enough to guarantee reliable outputs. It can be argued that sharing data publicly could actively harm the users the data belongs to; however, there are several ways to preserve users’ anonymity, and these methods are already in use.
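          As a rough illustration of the first option, the sketch below treats a model as a black box. The function name audit_model and the toy records are assumptions made for this example, not a real auditing tool. Given only a model’s prediction function and an auditor’s own labeled records, it compares positive-outcome rates across groups, which is enough to flag possible disparate impact without seeing any of the model’s internals.

```python
# A hedged sketch of an external "black box" audit: we see only the model's
# predict function and our own records, never its internal workings.
from collections import defaultdict

def audit_model(predict, records, group_key):
    """Compare each group's positive-outcome rate from an opaque predict function."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += 1 if predict(record) else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    for group, rate in sorted(rates.items()):
        # "Four-fifths" rule of thumb: flag groups selected at under 80% of the top rate.
        flag = "  <- possible disparate impact" if rate < 0.8 * best else ""
        print(f"group {group}: selection rate {rate:.2f}{flag}")
    return rates

# Toy usage: an auditor probes a score-threshold model with hand-made records.
toy_predict = lambda r: r["score"] > 0.5
records = [
    {"group": "A", "score": 0.9}, {"group": "A", "score": 0.7},
    {"group": "B", "score": 0.4}, {"group": "B", "score": 0.6},
]
audit_model(toy_predict, records, "group")
```

          Audits like this are exactly what public accessibility enables: any outside party with representative records of their own can run them, no cooperation from the developer required.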

Conclusion
          Because today’s world is heavily reliant on technology, it’s important that the tools we use provide fair and reliable results. Right now, algorithmic bias can affect millions of individuals, in decisions ranging from choosing a navigation route to determining a sentence for a convict. Whatever the scenario, it’s important that we receive the most reliable outputs possible, because we are staking parts of our lives on the decisions made by algorithms.

References

1. S. Tolan, Fair and Unbiased Algorithmic Decision Making: Current State and Future Challenges. European Commission, Joint Research Centre, (2019)
2. J. Manyika et al., What Do We Do About the Biases in AI? Harvard Business Review, (2019)
3. S. Worrall, Computers tell us who to date, who to jail. But should they? National Geographic, (2018)
4. S. Barocas, Governing Algorithms: A Provocation Piece. New York University, (2013)
5. M. Sears, AI Bias and the ‘People Factor’ in AI Development. Forbes, (2018)
6. J. Larson et al., How We Analyzed the COMPAS Recidivism Algorithm. ProPublica, (2016)
7. S. Gibbs, Women less likely to be shown ads for high-paid jobs on Google, study shows. The Guardian, (2015)
8. E-commerce and Platforms, Algorithmic Awareness Building. European Commission, (2020)
9. L. Sweeney, Discrimination in Online Ad Delivery. ACM Queue, (2013)
10. N. T. Lee et al., Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms. Brookings Institution, (2019)
11. M. Rovatsos et al., Bias in Algorithmic Decision-Making. Centre for Data Ethics and Innovation, (2019)
