Artificial Intelligence Displays Human Prejudices

Fagjun | Published 2017-04-15 06:19

Scientists have found that we have transferred our racial and gender prejudices to artificial intelligence programs.

When we think of artificial intelligence, we don't really think that it has the same biases that human beings have. We think of AI as almost coldly rational and logical, while biases are generally irrational and illogical. It may thus come as a surprise that AI can exhibit biases and prejudices as well.

However, AI isn't inherently biased and prejudiced. “This is showing we’re prejudiced and that AI is learning it,” says Joanna Bryson, one of the study's authors.

Thus, it's not really the AI's fault. It's ours. The problem, however, is that while we have biases, we also have the capacity to consciously counteract them. AI, on its own, doesn't. The researchers found that AI programs can reinforce our biases in unpredictable ways.

Word Embedding, Bias Embedding

Artificial intelligence programs use a machine learning tool called “word embedding”. Simply put, word embedding turns words and text into numbers that machine learning algorithms can work with. It allows computers to interpret text as well as speech, and it has been instrumental in helping computers “understand” natural language.
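To make the idea concrete, here is a minimal sketch in Python. The four-dimensional vectors below are made-up illustrative values; real embedding models such as word2vec or GloVe learn vectors with hundreds of dimensions from large text corpora. The key idea is that words appearing in similar contexts end up with similar vectors, and similarity is typically measured as the cosine of the angle between them.

```python
import numpy as np

# Toy 4-dimensional embeddings (illustrative values only; real models
# learn hundreds of dimensions from billions of words of text).
embeddings = {
    "nurse":    np.array([0.8, 0.1, 0.3, 0.5]),
    "engineer": np.array([0.1, 0.9, 0.4, 0.2]),
    "she":      np.array([0.7, 0.2, 0.5, 0.6]),
    "he":       np.array([0.2, 0.8, 0.5, 0.3]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Because similar contexts produce similar vectors, the cosine score
# doubles as an association score between two words.
print(cosine_similarity(embeddings["nurse"], embeddings["she"]))
print(cosine_similarity(embeddings["nurse"], embeddings["he"]))
```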

Thus, the text we feed into AI programs comes from our own use of language, biases and prejudices included. According to the findings of the research, the algorithms acquire the same biases that people exhibit in psychology experiments. For example, the AI programs closely associated feminine nouns with occupations in the arts and humanities, while they associated masculine nouns with occupations in engineering and science.
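Associations like these are easy to probe with publicly available pretrained vectors. A quick check, assuming the gensim library and the GloVe vectors it can download (the exact numbers depend on the model; the sign of the differences is the point):

```python
import gensim.downloader as api

# Download and load pretrained GloVe vectors (~130 MB on first use).
model = api.load("glove-wiki-gigaword-100")

# A positive difference means the occupation word sits closer to one
# gendered pronoun than to the other -- the kind of association
# the study measured.
print(model.similarity("nurse", "she") - model.similarity("nurse", "he"))
print(model.similarity("engineer", "he") - model.similarity("engineer", "she"))
```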

Google Translate, for instance, exhibits a gender bias concerning professions. The Filipino third-person pronoun "siya" is gender-neutral. Yet when the algorithm translates Filipino sentences into English, it renders "siya" as the masculine "he" when the profession in question is in medicine, and as the feminine "she" when the profession is teaching.
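You can try this kind of check yourself. The sketch below assumes the unofficial googletrans Python package; unofficial clients break often, and because the translations depend on Google's models at any given moment, the gendered output described above may well change over time:

```python
from googletrans import Translator  # unofficial Google Translate client

translator = Translator()

# "Siya ay doktor." = "He/She is a doctor."
# "Siya ay guro."   = "He/She is a teacher."
# "tl" is the language code Google uses for Filipino/Tagalog.
for sentence in ["Siya ay doktor.", "Siya ay guro."]:
    result = translator.translate(sentence, src="tl", dest="en")
    print(f"{sentence} -> {result.text}")
```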

The programs were also more likely to associate European-American names with positive words, and African-American names with more negative words. These findings concur with those of a study on implicit associations, in which participants from the US and the UK were quicker to match positive words with Caucasian faces.
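The researchers quantified such associations with a measure they call the Word Embedding Association Test (WEAT), modeled on the implicit-association test. A simplified sketch of the effect-size computation, with the word lists and embedding lookup left as placeholders:

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def weat_effect_size(X, Y, A, B, vec):
    """WEAT-style effect size: how much more strongly target words X
    associate with attribute words A (versus B) than target words Y do,
    in units of the pooled standard deviation."""
    def s(w):
        # Differential association of one word with the two attribute sets.
        return (np.mean([cos(vec[w], vec[a]) for a in A])
                - np.mean([cos(vec[w], vec[b]) for b in B]))
    x_scores = [s(x) for x in X]
    y_scores = [s(y) for y in Y]
    return (np.mean(x_scores) - np.mean(y_scores)) / np.std(x_scores + y_scores)

# Hypothetical usage; `vectors` would map each word to its embedding,
# and the word lists would come from the paper's stimuli, e.g.:
# european_names = ["Adam", "Harry", ...]; african_names = ["Jamal", ...]
# pleasant = ["love", "peace", ...];       unpleasant = ["hatred", ...]
# print(weat_effect_size(european_names, african_names,
#                        pleasant, unpleasant, vectors))
```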

Scrubbing Algorithms Clean of Prejudices

The AI programs will retain these biases unless programmers address the problem. However, it's best not to see the algorithms themselves as the problem. Rather, they present an opportunity to spot prejudices and address them when they surface.

Addressing and counteracting prejudices in AI programs is challenging, however. Programmers will have to find a way to remove biases and prejudices from algorithms without compromising their ability to interpret words and meanings. It's an important task, and the researchers urge programmers to rise to the challenge.
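The study itself doesn't prescribe a fix, but one published approach from outside it, the "hard debiasing" of Bolukbasi et al. (2016), removes the component of each word vector that lies along a learned bias direction. A minimal sketch of that idea:

```python
import numpy as np

def debias(vector, bias_direction):
    """Remove the component of `vector` that lies along `bias_direction`
    (a simplified form of the hard debiasing of Bolukbasi et al., 2016)."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vector - np.dot(vector, b) * b

# The bias direction is usually estimated from definitional pairs,
# e.g. the difference between the vectors for "he" and "she":
# gender_direction = vec["he"] - vec["she"]
# vec["engineer"] = debias(vec["engineer"], gender_direction)
```

The tension the researchers point to shows up here: projecting out a direction discards information, so the challenge is to remove the biased component without degrading the embedding's ability to represent meaning.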
