Scientists have figured out why AI can become racist and sexist


2017-04-21 13:00:11







Microsoft's failed experiment with its AI chatbot Tay, which within 24 hours of interacting with people on Twitter turned into a hardened racist, showed that newly emerging AI systems can fall victim to human prejudice and, in particular, to stereotypical thinking. A small group of researchers from Princeton University set out to find out why this happens, and, interestingly, they succeeded. What is more, they developed an algorithm that can predict the expression of social stereotypes on the basis of an intensive analysis of how people communicate on the Internet.

Many AI systems are trained to understand human language on massive collections of text data called corpora. These are, in essence, web archives drawn from across the Internet, containing some 840 billion tokens, or words. Researcher Aylin Caliskan and her colleagues from Princeton's Center for Information Technology Policy wondered whether the Common Crawl corpus (one of the most popular resources for training AI), created in effect by millions of Internet users, contains stereotypical notions that could be detected with a computer algorithm. To find out, they turned to a rather unconventional tool: the Implicit Association Test (IAT), used to study people's attitudes and stereotypes.

In general, the test works as follows: people are asked to sort a set of words into two categories. The longer a person takes to decide which category a particular word belongs to, the less strongly that person associates the word with that category. IAT tests are used to measure the level of stereotyping in people by having them sort random sets of words into categories such as gender, race, physical ability, age and so on. The results of such tests are, as a rule, predictable. For example, most respondents associate the word "woman" with a term like "family", while "man" goes with "work". But the very obviousness and predictability of the results is what proves the usefulness of IAT tests, which in their total weight point to our stereotypical thinking. Among scientists there is, of course, some debate about the accuracy of the IAT, but the majority agree that these tests directly reflect our attitudes.
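The latency-based scoring described above can be sketched in a few lines. The sketch below uses the standard D-score idea (latency difference scaled by pooled standard deviation); the reaction-time numbers are entirely hypothetical and chosen only for illustration:

```python
# Minimal sketch of IAT-style scoring on hypothetical latency data.
# Faster responses in the stereotype-consistent ("congruent") pairing than in
# the inconsistent ("incongruent") pairing yield a positive D score,
# indicating an implicit association.
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """D score: mean latency difference scaled by the pooled standard deviation."""
    pooled = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled

# Hypothetical reaction times in milliseconds
congruent = [620, 580, 640, 600, 590]    # e.g. woman+family / man+career
incongruent = [780, 820, 760, 800, 790]  # e.g. woman+career / man+family

print(round(iat_d_score(congruent, incongruent), 2))
```

With these toy numbers the incongruent pairing is consistently slower, so the score comes out positive, just as a stereotyped respondent's would.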

Using the IAT as a model, Caliskan and her colleagues created the WEAT (Word-Embedding Association Test) algorithm, which analyzes whole text fragments to find out which linguistic entities are more closely tied to one another than others. Part of the test is based on GloVe (Global Vectors for Word Representation), a method developed at Stanford University that computes vectors of semantic relations between words, that is, groups related terms together. For example, the word "dog", represented in a semantic vector model, will be associated with words like "puppy", "hound", "canine" and other terms describing dogs. The point of such semantic models is not to describe the word "dog" but to describe the concept of a dog, that is, to capture what it is. This is especially important when working with social stereotypes, when someone, for example, describes the term "woman" through notions such as "girl" or "mother". Such models are widely used in computational linguistics. To simplify the work, the researchers limited each semantic concept to three hundred vector dimensions.
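The relatedness that such vector models capture is usually measured as cosine similarity between word vectors. A minimal sketch, with invented 3-dimensional toy vectors standing in for real 300-dimensional GloVe embeddings:

```python
# Cosine similarity between word vectors: related concepts point in similar
# directions, so their cosine is close to 1; unrelated ones are close to 0.
# The toy vectors below are invented for illustration only.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

dog = [0.9, 0.1, 0.3]
puppy = [0.8, 0.2, 0.4]
car = [0.1, 0.9, 0.2]

print(cosine(dog, puppy) > cosine(dog, car))  # "dog" sits nearer "puppy" than "car"
```

In a real model trained on Common Crawl, the same comparison would show "dog" clustering with "puppy", "hound" and the rest of the canine vocabulary.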

To determine how strongly each concept found on the Internet is associated with another concept within a text, the WEAT algorithm looks at many factors. At the most basic level, Caliskan explains, the algorithm checks how close two individual concepts sit to each other (that is, it measures the proximity of their locations within the vector space), while also taking into account other factors, such as how frequently a word is used.

After an algorithmic transformation, the "closeness" of concepts in WEAT is taken as the equivalent of the time a person needs to categorize a concept in an IAT test. The farther apart two concepts are, the more remote the associative connection the human brain builds between them. The job of the WEAT algorithm, in this respect, is to find the same stereotypical relations that were detected in IAT tests.
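The comparison WEAT performs can be sketched as an effect-size statistic: for two sets of target words X and Y and two sets of attribute words A and B, each word's association is the difference of its mean cosine similarity to A versus B, and the effect size is the standardized difference of those associations between X and Y. The 2-dimensional vectors below are invented purely for illustration:

```python
# Sketch of a WEAT-style effect size on invented toy embeddings.
import math
from statistics import mean, stdev

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def assoc(w, A, B):
    # How much more similar word w is to attribute set A than to set B
    return mean(cosine(w, a) for a in A) - mean(cosine(w, b) for b in B)

def weat_effect_size(X, Y, A, B):
    s = [assoc(w, A, B) for w in X + Y]
    return (mean(s[:len(X)]) - mean(s[len(X):])) / stdev(s)

# Toy embeddings: target sets X/Y and attribute sets A/B
X = [[0.9, 0.1], [0.8, 0.2]]   # e.g. female terms
Y = [[0.1, 0.9], [0.2, 0.8]]   # e.g. male terms
A = [[1.0, 0.0]]               # e.g. "family" attribute words
B = [[0.0, 1.0]]               # e.g. "career" attribute words

print(weat_effect_size(X, Y, A, B) > 0)  # positive: X leans toward A, Y toward B
```

A positive effect size here plays the role of a slow incongruent response in the human IAT: it signals that one target set is systematically closer to one attribute set than the other.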

"We essentially adapted the IAT tests for machines. And our analysis showed that if you feed an AI human data that contains stereotypes, that is exactly what it learns," Caliskan comments.

What's more, this set of stereotypical data will affect how the AI behaves in the future. As an example, Caliskan cites the way Google Translate chooses gendered words when translating into English from other languages, based on the gender stereotypes it has learned. Now imagine the Internet flooded with an army of AI bots that reproduce all the stereotypical notions they have picked up from us. That is the future awaiting us if we do not seriously think about some method of correcting the stereotypical behavior of such systems.

Although Caliskan and her colleagues found that online language is flooded with social stereotypes and prejudices, they also found perfectly sound and correct associative patterns. In one of the tests, the researchers found a strong association between the concepts "woman" and "motherhood". This association simply reflects the reality in which motherhood and parenting really are regarded mainly as women's domain.

"Language is a reflection of the real world," says Caliskan.

"Removing the stereotypical concepts and statistical facts about the world would make the language model less accurate. But then again, you cannot simply go and delete all the stereotypical concepts, so we need to learn to work with what we have now. We have consciousness; we can make the right decisions instead of biased ones. A machine has no consciousness. Therefore, artificial-intelligence experts need to endow machines with the ability to make decisions that are not based on stereotyped and biased views."

And yet, according to the researchers, the solution to the problem of human language is the human being.

"I can't imagine many cases where a person would not be needed to check whether the right decision has been made. A person will be aware of all the edge cases in decision-making. Decisions should therefore be made only after it becomes clear that they will not be biased."

In certain circles there is now a lively discussion about robots soon taking away our jobs. Yet when we get AI that can work for us, we will have to invent new jobs for people who will check the decisions the AI makes, lest, God forbid, it make them from a position of bias that, once again, it learned from us. Take chatbots, for example. Even if they become completely autonomous, they were originally created by people with their own prejudices and stereotypes. So, since stereotypical notions are built into the very fabric of language, people will still be needed to choose the right solution, no matter how advanced the AI system.

In an article recently published in the journal Science, the Princeton scientists say that this situation could have serious and far-reaching consequences in the future.

"Our findings definitely add fuel to the discussion around the Sapir-Whorf hypothesis. Our work shows that behavior can be rooted in historical cultural norms. And in each case it can differ, because each culture has its own history."

The recently released sci-fi film "Arrival" touches on exactly this idea, the Sapir-Whorf hypothesis, according to which the structure of a language influences the worldview and beliefs of its speakers. Now, thanks to the work of Caliskan and her colleagues, we have an algorithm that supports this hypothesis, at least with respect to gender-biased, stereotyped social concepts.

The researchers want to continue their work, this time focusing on other areas and looking for as-yet-unstudied stereotyped characteristics in language. Perhaps the object of study will be the patterns created by fake news in the media, or stereotypical notions within particular subcultures or geographically distinct cultures. There is also the possibility of studying other languages, where stereotypical notions may be woven into the language differently from the way they are in English.

"Suppose that in the future a particular culture or geographic region begins to show severe stereotypical thinking. Instead of examining and testing every individual person, which would take a great deal of time, money and effort, you could simply analyze the text data of that group of people and on that basis determine whether we are dealing with stereotyped perception or not. That would save both money and time significantly," the researchers conclude.


