Racial Biases

The moral quandaries of AI decision making.

With the advent of ChatGPT (a computer program that uses AI to comprehend and respond to natural language text) and similar software, the use of artificial intelligence (AI) has increased rapidly. While some people use AI for relatively trivial purposes, such as essay assistance or generating gift ideas, others apply its intelligence to more consequential tasks. Many individuals and corporations have begun to entrust AI with essential choices such as screening job applicants or allocating housing. The rationale behind this immense faith in AI is that its innate logic will rule out discriminatory judgment, thus leading to optimal decisions. However, various attempted implementations of AI suggest that this blind faith may be misguided.


In some instances, AI’s heavy reliance on crunching numerical data without sufficient context has led to implicit biases. For example, a few years ago, Amazon set out to create the most efficient recruiting tool possible. The company worked diligently to construct the software, was confident in its effectiveness, and even implemented it into its workflow. However, something strange occurred: the algorithm began flagging words indicating a female applicant as unfavorable and even started devaluing graduates of all-women’s universities relative to their peers (Mahdawi, Par 3). After the team investigated the issue, they realized that the predominantly male applications of the previous decade had built a bias into the “fair system.”
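
To make this mechanism concrete, here is a minimal, hypothetical Python sketch of how a screener trained on male-skewed hiring history can learn to penalize female-coded words. The resumes, labels, and phrasing below are invented for illustration; this is not Amazon’s actual system or data.

```python
# Hypothetical sketch: a resume screener trained on biased historical data.
# All resumes and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: past hires (label 1) skew heavily male,
# so female-coded words appear mostly in rejected resumes (label 0).
resumes = [
    "captain of chess club, java developer",        # hired
    "java developer, men's rugby team",             # hired
    "python developer, hackathon winner",           # hired
    "women's chess club captain, java developer",   # rejected
    "python developer, women's coding society",     # rejected
    "java developer, women's university graduate",  # rejected
]
labels = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()  # default tokenizer turns "women's" into "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Inspect the learned weights: the token "women" ends up with a strongly
# negative coefficient even though gender was never an explicit input.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```

The point of the sketch is that nobody programmed a rule against women; the model simply reproduced the pattern it found in the historical outcomes it was given.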


Other times, the limited inputs chosen by programmers have led AI to perform precise analysis of the wrong variables, producing flawed conclusions. For example, this was demonstrated in an algorithm that predicted which patients would need additional healthcare. Even though race was never directly programmed in, the algorithm accounted for each patient’s history of healthcare costs. Unfortunately, throughout American history, this number has consistently been lower for African-American individuals due to unequal access to care, leading the learning model to classify them as deserving less healthcare support. Luckily, in both of these cases, the programmers caught the issues and could offset the biases. However, corporations need to take steps to prevent such events from occurring in the first place.
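
This proxy-variable problem can be shown without any machine learning at all. In the hypothetical sketch below, the numbers and group labels are invented; the structural point is that if historical spending runs lower for one group at the same level of illness, any system that ranks patients by cost will under-serve that group.

```python
# Minimal sketch of the proxy-variable problem: cost stands in for need.
# All severities, costs, and group labels are invented for illustration.

patients = [
    # (group, true_illness_severity, historical_cost)
    ("A", 8, 9000),  # group A: costs closely track how sick patients are
    ("A", 5, 6500),
    ("B", 8, 6000),  # group B: equally sick, but historically spent less
    ("B", 5, 3500),  #  due to unequal access to care
]

# Stand-in for a model trained to predict cost: allocate extra care
# to the patients with the highest past spending.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
# What the program was supposed to do: allocate by actual severity.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

top_k = 2
print("selected by cost:", [p[0] for p in by_cost[:top_k]])  # ['A', 'A']
print("selected by need:", [p[0] for p in by_need[:top_k]])  # ['A', 'B']
```

The two rankings diverge even though race never appears as an input, which is exactly how the healthcare algorithm went wrong.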


According to Harvard Business Review writer Andrew Burt, the first step is to mandate that existing legal standards of equality law be incorporated into the creation of AI algorithms (Burt, Par 2). Doing so would at least remove the most extreme biases present in these algorithms. Beyond this, however, it gets tricky. Programming software to be objective while it evaluates data shaped by thousands of entrenched US inequalities is arduous. Burt describes how the only reliable approach so far is heavy monitoring to check for biases, which undercuts much of AI’s appeal in the first place.
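
What that monitoring might look like in practice is sketched below, assuming a simple audit that compares selection rates across groups after each scoring run. The `audit` function, the 0.2 threshold, and the sample decisions are all hypothetical illustrations, not a prescribed standard.

```python
# Hypothetical sketch of ongoing bias monitoring: flag large gaps in
# selection rates between groups. Threshold and data are invented.

def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs."""
    totals, selected = {}, {}
    for group, picked in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def audit(decisions, max_gap=0.2):
    """Raise an alert if the gap between group selection rates is too large."""
    rates = selection_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        raise RuntimeError(f"selection-rate gap {gap:.2f} exceeds {max_gap}: {rates}")
    return rates

# Example run: 60% of group A selected vs 20% of group B.
decisions = ([("A", True)] * 6 + [("A", False)] * 4
             + [("B", True)] * 2 + [("B", False)] * 8)
audit(decisions)  # raises RuntimeError: the 0.40 gap exceeds the 0.20 threshold
```

Checks like this catch biased outcomes only after the fact, which is why Burt characterizes the approach as labor-intensive rather than a true fix.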


AI has taken the world by storm over the last couple of years. However, it still has a long way to go before it can be entrusted with our most consequential decisions.

Works Cited:

Burt, A. (2020, August 28). How to Fight Discrimination in AI. Harvard Business Review. https://hbr.org/2020/08/how-to-fight-discrimination-in-ai


Reuters. (2018, October 10). Amazon ditched AI recruiting tool that favored men for technical jobs. The Guardian. https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine