Who Is Responsible for the Moral Consequences of Algorithmic Bias?
- Molly Bombard
- May 2
Note: my footnotes are not showing here, but all sources can be found in my references at the bottom
1. Introduction
By 2024, there will be an estimated 28.7 million software developers across the globe. This rapid rise in demand for software engineers indicates an increasing global reliance on technology in the 21st century. As the capabilities of technology become more advanced, artificial intelligence (AI) is expanding into every facet of our lives, including the medical field, the judicial system, and the workplace. Although technology has the power to drive innovation and improve global conditions, many AI systems have demonstrated a propensity to perpetuate existing societal biases. When deciding how to program an algorithm, developers must make decisions that affect the entire population, including how the AI will affect its users with regard to justice-based concerns. Yet, in the United States, enforcing fairness in AI has proved challenging due to technical and societal hurdles. This paper posits that although software developers have a responsibility to avoid discrimination in their algorithms, the United States government ought to be the party held morally responsible for regulating AI fairness. This is for three reasons: (1) the government's role in creating the biases that are reflected in the algorithms, (2) the government's duty to ensure justice for all citizens, and (3) the black box nature of proprietary algorithms.
2. Three Cases of AI Discrimination
In order to understand the ethical implications of AI, this paper will examine three specific cases across three different fields of interest: the criminal justice system, the medical field, and corporate hiring practices. I will later discuss the implications of AI discrimination in these areas.
2.1 AI Discrimination in Criminal Proceedings
One of the most notable instances of how artificial intelligence can perpetuate inequalities lies in the criminal justice system. Recently, AI has been used to predict rates of criminal recidivism in the United States, affecting whether a defendant is eligible for parole. The technology company Northpointe created a racially discriminatory algorithm called the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) that is still among the most widely used risk-assessment tools in the country. A study by ProPublica found that the software was nearly twice as likely to falsely flag Black defendants as future criminals as it was white defendants. Further, white defendants were mislabeled as low-risk far more often than Black defendants. This in turn makes it more likely for police to profile Black citizens as criminals, adding to cyclical patterns of inequality in the United States.
2.2 AI Discrimination in the Workplace
When used in hiring practices, algorithms have the potential to reinforce prejudicial behavior. In 2014, Amazon created an algorithm to automatically sort through resumes and decide which candidates should move forward to a job interview. However, it was later revealed that the tool systematically downgraded the resumes of women applying for roles within Amazon's engineering department. Meanwhile, 55% of American human resources managers said that AI would be a "regular part of their work within the next five years," illustrating a clear need for ethical considerations within artificial intelligence. Fortunately, Amazon stopped using the tool once the problem came to light, but technologies such as the COMPAS software remain in use today.
2.3 AI Discrimination in the Medical Field
This issue becomes even more concerning in the context of one's health. In 2019, researchers found that an algorithm created by the health services company Optum to predict which patients would benefit most from additional care was racially biased. Specifically, the algorithm assigned risk scores to patients on the basis of the total healthcare costs they accrued in one year. One might assume that greater healthcare costs are associated with greater health needs. However, this was not the case for Black patients. The data revealed that Black patients accrued roughly $1,800 less in healthcare costs per year than white patients with the same level of health need. Thus, Black patients had to be sicker than white patients before being referred for additional treatment. Only 17.7% of the patients that the algorithm selected for extra care were Black; had the results been unbiased, that number should have been closer to 46.5%. Given that Optum's data analytics service, OptumInsights, serves 90% of United States hospitals, there is a clear moral issue at hand.
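To make this mechanism concrete, consider the following minimal sketch of a cost-based scoring rule. The function, cutoff, and dollar figures are hypothetical simplifications for illustration only, not Optum's actual model; the point is simply that when prior spending stands in for health need, a group that has historically received less care is scored as needing less care.

```python
# Hypothetical sketch (not Optum's model): scoring "health need" by prior spending.
# If one group has systematically accrued lower costs for the same level of
# illness, a cost-based score ranks its members as having lower "need."

def cost_proxy_risk_score(annual_cost_usd: float, max_cost_usd: float = 50_000) -> float:
    """Return a 0-1 'need' score based only on prior healthcare spending."""
    return min(annual_cost_usd / max_cost_usd, 1.0)

REFERRAL_CUTOFF = 0.20  # hypothetical threshold for an extra-care referral

# Two equally sick patients; one has historically accrued ~$1,800 less per year.
patients = {"Patient A": 10_000, "Patient B": 10_000 - 1_800}

for name, cost in patients.items():
    score = cost_proxy_risk_score(cost)
    print(f"{name}: score={score:.3f}, referred={score >= REFERRAL_CUTOFF}")
# Patient A: score=0.200, referred=True
# Patient B: score=0.164, referred=False
```

Nothing in this rule mentions race, yet it reproduces the disparity, because the disparity lives in the spending data itself.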
3. Are Software Developers to Blame?
When looking at these instances of AI discrimination, it is easy to foist the blame on the software developers who created the technology. After all, they do have a responsibility to ensure their algorithms are equitable and compliant with anti-discrimination laws. In fact, Amazon’s discriminatory hiring algorithm goes against Title VII of the Civil Rights Act of 1964—a law that prohibits employment discrimination based on race, color, religion, sex, and national origin.
Yet, it is doubtful that software developers should be held solely morally responsible for the consequences of the algorithms they create. Understanding this point requires a closer examination of the ways in which bias is introduced into algorithms.
Take the three examples above. In the first, COMPAS bases its risk assessment on a set of 137 questions answered by the defendant. Such questions include "was one of your parents ever sent to jail or prison?" and "does a hungry person have a right to steal?" Notably, race is never a question in the algorithm's assessment. How, then, did the algorithm discriminate against Black defendants? The answer becomes clear when looking at the United States' long history of discrimination and racism. Black defendants are more likely to answer 'yes' to questions about their family's history with incarceration because, although Black Americans make up 13% of the US population, they account for 38% of the prison population. The incarceration rate for white Americans is 450 individuals for every 100,000 people, yet for African Americans the number reaches 2,306 individuals. These disparities lead the algorithm to classify Black defendants as "high-risk" on the basis of their life circumstances. As the false flagging by COMPAS reveals, such factors say little about who a person actually is, or how likely they are to reoffend. In fact, the study by ProPublica showed that only 20% of the defendants predicted to commit violent crimes went on to do so.
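To see how a proxy of this kind operates, consider the small simulation below. The data is entirely fabricated and the model is a generic logistic regression, not the COMPAS questionnaire or Northpointe's software; it only illustrates that a model never given race as an input can still falsely flag one group far more often when an input question and the arrest-based training label are both shaped by unequal enforcement.

```python
# Fabricated simulation (not COMPAS): race is never an input, yet a question
# shaped by unequal incarceration rates acts as a proxy for it, and an
# arrest-based training label rewards that proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
is_black = rng.random(n) < 0.5                      # hidden from the model

# Answer to "was one of your parents ever sent to jail or prison?", unequal
# across groups because of the disparate incarceration rates cited above.
parent_incarcerated = np.where(is_black,
                               rng.random(n) < 0.40,
                               rng.random(n) < 0.10).astype(float)

truly_reoffends = rng.random(n) < 0.30              # same 30% rate for everyone
# The training label is *re-arrest*, which over-polices one group.
rearrested = truly_reoffends | (is_black & (rng.random(n) < 0.25))

X = parent_incarcerated.reshape(-1, 1)
risk = LogisticRegression().fit(X, rearrested).predict_proba(X)[:, 1]
flagged = risk > 0.40                               # "high risk" label

# Among people who would NOT actually reoffend, who gets falsely flagged?
for group, mask in [("Black", is_black), ("white", ~is_black)]:
    innocent = mask & ~truly_reoffends
    print(group, "false-flag rate:", round(flagged[innocent].mean(), 2))
# The group the model never saw is falsely flagged several times more often.
```

The model here is not malicious and sees nothing about race; it simply learns the statistical shadow that discriminatory policy has already cast over its inputs and labels.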
In the second example, Amazon's AI hiring software was trained on resumes from existing employees, with the idea that the model would learn to identify candidates with the requisite skills to fill each position. In the Amazon engineering department, the majority of employees were male. This imbalance was reflected in the algorithm, causing resumes that included the word "women's" to be penalized. This was also the case for resumes that listed women's colleges. Unbeknownst to the software developers, the fact that only 15.9% of engineers in the United States are women, a statistic that reflects decades of gender discrimination in the workplace and in higher education, was naturally reflected in the software.
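A toy version of such a screener, trained on invented resumes, shows how the pattern arises: if the historical "hired" label comes from a male-dominated team, tokens that appear mainly on women's resumes receive negative weight even though gender is never an input. The resumes, labels, and tokens below are fabricated for illustration and bear no relation to Amazon's actual system.

```python
# Fabricated toy screener (not Amazon's system): gender is never an input, but
# a model trained on historical hiring outcomes from a male-dominated team
# learns to penalize words that appear mainly on women's resumes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer java distributed systems",          # hired
    "c++ developer systems programming chess club",        # hired
    "java backend engineer cloud infrastructure",          # hired
    "python machine learning engineer java",               # hired
    "software engineer women's chess club captain",        # rejected
    "women's college graduate python developer",           # rejected
    "java developer womens coding society mentor",         # rejected
    "graduate developer customer support experience",      # rejected
]
hired = [1, 1, 1, 1, 0, 0, 0, 0]   # labels reflect the historical imbalance

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for token in ("systems", "engineer", "women", "womens"):
    print(f"{token!r}: {weights[token]:+.3f}")
# "women" / "womens" appear only on rejected resumes, so they receive negative
# weights; any new resume containing them is scored lower, sight unseen.
```

The screener never asks about gender; it only needs a historical record in which gendered words and rejection happen to travel together.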
Unsurprisingly, Optum's racially discriminatory medical algorithm also incorporated societal biases despite race never being a factor in the AI's development. Black patients were disadvantaged as a result of systemic racism in medicine, a legacy that seeded distrust in the healthcare system and still prevents minorities from receiving care today. There are endless examples of discriminatory practices in medicine, such as the infamous Tuskegee Study, which beginning in the 1930s observed the effects of untreated syphilis in Black men rather than giving them treatment that later became readily available. According to the Kaiser Family Foundation, only 59% of Black Americans say they trust doctors "almost all of the time," compared to 78% of white Americans. Lower trust and lower access translate into lower healthcare spending, and these trends were then reflected in the Optum software.
4. Black Box Theory
Given this information, one might assume that software developers can simply remove identified biases from their algorithms, thereby reversing systemic patterns of discrimination. Yet the truth is more complicated. In the Amazon hiring case, when developers identified the discriminatory consequences of their algorithm, they were unable to rectify the disparate outcomes and had to discontinue the use of the software altogether. Why is it so difficult for software developers to rectify justice concerns in their AI?
The answer lies in the nature of artificial intelligence and the lack of algorithmic explainability in machine learning, an application of AI that enables systems to recognize patterns in data without being explicitly programmed to do so. Neural networks, a common machine learning architecture, are built from layers of interconnected nodes that learn to recognize patterns in vast amounts of data, loosely mimicking the human brain's ability to do the same. To create a risk-assessment tool like Northpointe's COMPAS, a model must be fit to large amounts of training data. Modern networks can contain millions of nodes and weighted connections, each of which processes a given input and passes the result on to the next layer. This creates a substantial challenge for algorithmic explainability.
Because of the sheer number of nodes and learned parameters in any given AI system, it is often impossible to see what the nodes have "learned" specifically. If the data used to train the AI contains bias, the algorithm will incorporate those biases, often without its developers' knowledge. Algorithms are therefore like black boxes, a term also used for the flight recorders in aviation: it is possible to see a system's inputs and outputs, but its inner workings remain opaque. Even if a software developer wanted to change the outcome of a discriminatory algorithm, it can be impossible to identify where the unequal outcomes arise within the model, especially when the algorithm is built on training data carrying biases from decades of legislative and social discrimination.
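Because the learned parameters themselves resist interpretation, the practical recourse is usually the one ProPublica took: treat the model as a black box and audit its outputs against real outcomes for each group. A minimal sketch of such an audit, using made-up records, might look like this.

```python
# Minimal sketch of a black-box fairness audit: we cannot inspect what the
# model's internal nodes have "learned," but we can compare its outputs with
# real outcomes for each group. All records below are made up for illustration.

def false_positive_rate(flagged, reoffended):
    """Share of people who did NOT reoffend but were still flagged high-risk."""
    innocents = [f for f, r in zip(flagged, reoffended) if not r]
    return sum(innocents) / len(innocents) if innocents else 0.0

def audit_by_group(records):
    """Compute the false-positive rate separately for each demographic group."""
    rates = {}
    for group in dict.fromkeys(r["group"] for r in records):
        subset = [r for r in records if r["group"] == group]
        rates[group] = round(false_positive_rate(
            [r["flagged_high_risk"] for r in subset],
            [r["reoffended"] for r in subset],
        ), 2)
    return rates

# Hypothetical audit records pairing the model's output with what actually happened.
records = [
    {"group": "Black", "flagged_high_risk": True,  "reoffended": False},
    {"group": "Black", "flagged_high_risk": True,  "reoffended": True},
    {"group": "Black", "flagged_high_risk": True,  "reoffended": False},
    {"group": "Black", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": True,  "reoffended": True},
    {"group": "white", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": True},
]
print(audit_by_group(records))   # {'Black': 0.67, 'white': 0.0}
# A large gap in false-positive rates, as ProPublica found, signals bias even
# though the model's internals remain opaque.
```

An audit of this kind can reveal that a system is discriminatory, but it cannot say which of the millions of internal parameters to change, which is precisely the developer's dilemma.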
5. Who Is to Blame?
If software developers cannot consistently predict, explain, or eliminate bias in their algorithms, owing to the black box nature of AI and the difficulty of controlling implicit biases in seemingly objective training data, then who should bear responsibility for AI's moral consequences? I believe the burden lies with the United States government, for two reasons: (1) the government's moral obligation to right the wrongs it has historically perpetuated, and (2) the universal moral responsibility of governments to treat all of their citizens equally.
First, according to philosopher Thomas Pogge’s theory of global justice, governments have a duty to aid those they have hurt in the past. At the core of the theory, Pogge believes that governmental bodies have a collective responsibility to ensure justice for their constituents—both domestically and globally. Specifically, he argues that “insofar as human agents are involved in the design or administration of rules, practices, or organizations, they ought to disregard their private and local commitments and loyalties to give equal consideration to the needs” of those who have been negatively affected by their government’s interests.
In the United States, human rights violations were built into the Constitution itself, which once counted enslaved Black Americans as three-fifths of a person. Racial and sexual injustices have set minorities back for generations. There is no shortage of discriminatory policies, such as the nefarious practice of redlining, which denied government-backed mortgages to minority neighborhoods. Redlining entrenched segregation and lending discrimination and weakened racial tolerance, contributing to the fact that Black households today hold 14.5% of the wealth possessed by white households, an absolute dollar gap of $838,220. Further, the "War on Drugs," a global campaign launched by President Nixon in the 1970s, led to a disproportionate number of arrests of African Americans under discriminatory drug policies. Black individuals are now 3.6 times more likely to be arrested for selling drugs than white individuals. These practices are now reflected in algorithms such as COMPAS that disproportionately flag Black defendants as high-risk.
A potential objector might argue that the United States has no such moral duty to aid those it has negatively impacted in the past. Yet even if one rejects this backward-looking duty, it does not justify the continuation of such harms. Philosopher John Rawls' "justice as fairness" argument posits that "each person has the same and indefeasible claim to a fully adequate scheme of equal basic liberties." Justice is a moral duty that ought to extend to all citizens, which indicates that the United States' legal tolerance of discriminatory algorithms goes against fundamental principles of justice. Further, Rawls argues that in order to cultivate a just society, "offices and positions must be open to all under the conditions of fair equality of opportunity." In the Amazon case, women were shut out of "offices and positions" historically dominated by men. Thus, the government must ensure fairness for all citizens, especially in the context of AI, where a lack of regulation has had deleterious impacts on the country's most vulnerable citizens.
6. What Can The Government Do?
Bioethicist Ben Mepham used such Rawlsian principles to propose the Ethical Matrix, a decision-making tool that has since been adapted for artificial intelligence. Mirroring Rawls' "veil of ignorance," the Ethical Matrix allows one to test decisions for fairness. Under this proverbial veil, all parties are ignorant of their stake in the decision: they have no knowledge of their class, privilege, or potential disadvantages. This allows individuals to make decisions in a way that preserves fairness, as no one wants the short end of the stick, so to speak. Similarly, the Ethical Matrix ensures respect for well-being, autonomy, and justice by considering the interests of users, affected citizens, technology providers, and the environment. Frameworks involving these considerations should be implemented by governmental bodies and extended to private technology companies.

Further, the United States government must ensure that global hegemons such as Apple, Microsoft, and Google are building fairness into their algorithms. In the case of the discriminatory COMPAS algorithm, Northpointe denied that its algorithm had disparate outcomes and continued its use. Proprietary software is treated as a "trade secret," a result of the increasing asymmetry of privacy afforded to companies versus individuals. Since companies like Northpointe are not required to publicly disclose the contents of their software, neither the public nor affected individuals can ascertain how they are being discriminated against. Thus, the government must take the initiative in ensuring equal treatment for its citizens.
In fact, there is precedent for the creation of a governmental body that specializes in emerging digital issues. The Federal Bureau of Investigation (FBI) now leads a multi-agency task force dedicated to combating cybercrime, the National Cyber Investigative Joint Task Force (NCIJTF). Creating a body dedicated to identifying and combating discriminatory AI in the marketplace is therefore not unreasonable; there are plenty of agencies with similar regulatory purposes, including the Food and Drug Administration (FDA) and the Consumer Product Safety Commission (CPSC). As it stands, companies may unknowingly perpetuate disparities along lines of race, gender, or sexual orientation. As algorithmic predictions become ever more ingrained in individuals' lives, the government has the moral responsibility to ensure that discriminatory algorithms are not allowed in the marketplace.
7. Two Caveats
Although the government ought to be held morally responsible for mitigating the biases that are often incorporated into artificial intelligence, this does not mean developers may knowingly create algorithms that produce disparate outcomes. In order to promote justice, software developers, and the companies that employ them, must make good-faith efforts to build fairness into their algorithms. Two conditions must be satisfied in this regard. First, developers must not purposefully or knowingly create discriminatory AI. Second, if they discover that one of their algorithms is unintentionally discriminating against minority groups, they must either fix the disparity or discontinue the algorithm's use altogether. Together, these conditions ensure that software developers are actively promoting justice in the 21st century.
8. Conclusion
The rise of artificial intelligence has encouraged its use in morally weighty arenas such as the medical field, judicial systems, and corporate hiring practices. Arguments surrounding AI must therefore involve ethical considerations. In the United States, little progress will be made against algorithmic bias and discrimination until those in power recognize their moral responsibility to ensure justice and fairness, principles supported by philosophers like John Rawls and Thomas Pogge as well as by the United States Constitution. This paper has argued that although software developers must make good-faith efforts to ensure fairness in their AI, the technical difficulty of identifying and mitigating societal biases, biases created in part by decades of discriminatory governmental policies, indicates that the government ought to regulate fairness in AI. By implementing regulatory bodies and incorporating justice-based ethics into the digital landscape, technology can remain a powerful tool for innovation rather than a means of perpetuating existing societal disparities.
Bibliography
Alsan, M., Wanamaker, M., & Hardeman, R. R. (2020). The Tuskegee study of untreated syphilis: A case study in peripheral trauma with implications for health professionals. Journal of general internal medicine, 35(1), 322-325.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. In Ethics of Data and Analytics (pp. 254-264). Auerbach Publications.
Adams, K. (n.d.). 11 Numbers that Show How Big Optum's Role in Healthcare Is. Becker's Hospital Review. Retrieved November 18, 2022, from https://www.beckershospitalreview.com/healthcare-information-technology/11-numbers-that-show-how-big-optum-s-role-in-healthcare-is.html
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. In Ethics of Data and Analytics (pp. 296-299). Auerbach Publications.
Hamel, L., Lopes, L., Muñana, C., Artiga, S., & Brodie, M. (2020). KFF/The Undefeated survey on race and health. Kaiser Family Foundation (KFF)
Krogh, A. (2008). What are Artificial Neural Networks? Nature Biotechnology, 26(2), 195-197.
Ledford, H. (2019). Millions of Black People Affected by racial bias in health-care algorithms. Nature, 574(7780), 608-610.
Ohline, H. A. (1971). Republicanism and slavery: origins of the three-fifths clause in the United States Constitution. The William and Mary Quarterly: A Magazine of Early American History, 563-584
O’Neil, C., & Gunn, H. (2020). Near-Term Artificial Intelligence and the Ethical Matrix. Ethics of Artificial Intelligence, 235-69.
Pernik, P., Wojtkowiak, J., & Verschoor-Kirss, A. (2016). National cyber security organization: United States. NATO Cooperative Cyber Defence Centre of Excellence, Tallinn, Estonia.
Pogge, T. (2013). Poverty and violence. Law, Ethics and Philosophy, 1, 87-111. https://raco.cat/index.php/LEAP/article/view/294762
Rawls, J. (1958). Justice as Fairness. The Philosophical Review, 67(2), 164–194. https://doi.org/10.2307/2182612
Selig, J. (2022, July 4). What is Machine Learning? . Expert AI . Retrieved November 18, 2022, from https://www.expert.ai/blog/machine-learning-definition/
Strauss, V. (2021, November 30). Where Are all the Women in Engineering? A Female Engineering Student Answers. The Washington Post. Retrieved November 18, 2022, from https://www.washingtonpost.com/news/answer-sheet/wp/2015/06/11/where-are-all-the-women-in-engineering-a-female-engineering-student-answers/
Title VII of the Civil Rights Act of 1964. U.S. Equal Employment Opportunity Commission. (1964). Retrieved November 18, 2022, from https://www.eeoc.gov/statutes/title-vii-civil-rights-act-1964
Vailshery, L. S. (2022, February 23). Global Developer Population 2024. Statista. Retrieved November 18, 2022, from https://www.statista.com/statistics/627312/worldwide-developer-population/
Wagner, P., & Sakala, L. (2014). Mass incarceration: The whole pie. Prison Policy Initiative, 12.
Weller, C. E., & Roberts, L. (2021). Eliminating the Black-White Wealth Gap is a Generational Challenge. Center for American Progress, March, 19, 2021.


