# Artificial Intelligence: Intelligent but Biased

You might remember the aftermath when Microsoft released its artificially intelligent chatbot, Tay, last year. Within only 24 hours, Tay started cracking racist jokes after adapting to the language of the Twitter users who interacted with it. Sadly, new research shows that AI, just like humans, picks up the implicit biases embedded in a language and starts using them.

Global Vectors for Word Representation (GloVe), a word-embedding method that captures statistical associations between words in large bodies of text, revealed that the same human biases surface in artificial systems. Aylin Caliskan, a postdoctoral researcher in computer science at Princeton University, stated that artificial intelligence, which collects information from supposedly neutral sources such as Wikipedia or news articles, ends up reflecting basic human biases.
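To get a concrete sense of what "associations between words" means here, the sketch below (an illustration, not the researchers' code) loads publicly available GloVe vectors and compares words by cosine similarity; the file name is an assumption based on the standard GloVe download.

```python
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a plain-text file: one word per line,
    followed by its vector components (the format of the public
    glove.6B downloads)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors

def cosine(a, b):
    """Cosine similarity: close to 1.0 means strongly associated,
    close to 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative usage (assumed local file): words that appear in similar
# contexts end up with similar vectors, so their cosine similarity is high.
vecs = load_glove("glove.6B.300d.txt")
print(cosine(vecs["flower"], vecs["pleasant"]))
print(cosine(vecs["weapon"], vecs["pleasant"]))
```

Because the vectors are learned purely from co-occurrence statistics in human-written text, whatever associations that text carries, pleasant or not, are baked into them.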

The research drew on a well-known psychological tool, the Implicit Association Test (IAT). For example, think of a flower name. People were asked to associate this word (the flower's name) with pleasant or unpleasant concepts such as "pain" or "beauty". As expected, flowers were associated with positive concepts, while weapons were associated with negative ones. The same method was used to reveal people's associations with social or demographic groups. Can you guess which group weapons were associated with? Black Americans.

The findings draw researchers into a debate: do people act on strictly personal grounds and ingrained biases they aren't aware of, or do they absorb these biases from the very nature of their language? In other words, could these be learned biases?

Caliskan and her team developed another test, the Word-Embedding Association Test (WEAT), to measure the strength of associations between words in the data GloVe provided. Each comparison returned results parallel to those of the IAT.
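To illustrate the idea behind WEAT, here is a minimal sketch under the assumption that each word is available as a NumPy vector (the function names are illustrative, not the authors' published code): each target word is scored by how much more similar it is to one attribute set than the other, and the two groups of target words are then contrasted, scaled by the spread of the individual scores.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w, A, B):
    """Mean similarity of word vector w to attribute set A minus its
    mean similarity to attribute set B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: how differently the two target sets
    X and Y associate with attribute sets A and B, scaled by the
    spread of the individual association scores (analogous to Cohen's d)."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Illustrative call: X and Y could hold vectors for flower vs. weapon names,
# A and B vectors for pleasant vs. unpleasant words.
```

A large positive effect size means the first target group (say, flowers) leans toward the first attribute set (pleasant words) much more strongly than the second group does, mirroring what the IAT measures in people.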

And worse…

- European-American names had more pleasant associations than African-American names.
- Male names were more readily associated with career-related words, and female names with domestic words.
- Men were more closely associated with math and science, while women were associated with the arts.
- Names associated with old people carried more unpleasant associations than names associated with young people.

Interestingly, the GloVe results lined up closely with data from the U.S. Bureau of Labor Statistics: the gender associations of occupation words correlated at roughly 90 percent with the actual percentage of women employed in those occupations. This means that programs which learn from human language make very accurate statements about our world and culture.
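As a rough sketch of how such a comparison could be run (the numbers and variable names below are placeholders for illustration, not the study's data), one could correlate an embedding-derived gender-association score for each occupation word with the official share of women in that occupation:

```python
import numpy as np

# Placeholder data: an embedding-derived "female association" score for a
# few occupation words, and the share of women in each occupation from
# labor statistics (illustrative numbers only, not the study's data).
embedding_score = np.array([0.42, -0.31, 0.15, -0.22, 0.38])
percent_women = np.array([78.0, 12.0, 55.0, 25.0, 70.0])

# Pearson correlation between the two series; the study reported a
# correlation of roughly 90 percent for real occupation data.
r = np.corrcoef(embedding_score, percent_women)[0, 1]
print(f"correlation: {r:.2f}")
```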

However, artificial intelligence struggles with context that humans grasp easily. For example, while an article about Martin Luther King Jr. being jailed for civil rights protests is likely to produce negative associations with African-Americans, humans know that it is the story of an American hero staging a righteous protest. But for the computer, the equation is: Martin Luther King = black = jail.

"In a biased situation, we know how to make the right decision," Caliskan says, "but unfortunately, machines are not self-aware."
