Can AI bias be eliminated?

: Just as people are clamoring to bring AI into every aspect of human life, believing that artificial intelligence will usher humanity into a new era of civilization and marveling at its possibilities, a new worry has arisen: artificial intelligence can develop the same kinds of prejudice and discrimination that humans hold, and these prejudices are not born with the machines; it is humans who teach them. An article recently published in the British newspaper The Guardian pointed out that when computers learn human languages they also absorb concepts entrenched in human culture, and acquire prejudice and discrimination along with them.

While people are eager to achieve social fairness and justice, an artificial intelligence that carries prejudice will be unable to accomplish its mission of taking over human work and serving humanity. This is another huge challenge for the application of artificial intelligence; if it cannot be met, it is clearly unwise to place high hopes on artificial intelligence or to entrust it with greater, more arduous and more noble missions.

AI bias and discrimination have long attracted attention. A typical example is Tay, the artificial intelligence chatbot Microsoft launched on March 23, 2016. Tay was designed to be a friendly young girl who would solve problems for users. Yet on its very first day online, Tay turned into a foul-mouthed racist, posting many white-supremacist remarks and even becoming a fan of Hitler who wanted to start a genocidal war. Seeing things go wrong, Microsoft immediately took Tay offline and deleted the offensive posts. After retraining, Tay went back online on March 30, 2016, but the old problem recurred and it had to be taken offline again.

Now, a study published in the journal Science (April 14, 2017) reveals that the roots of artificial intelligence bias lie with humans. Arvind Narayanan, a computer scientist at Princeton University's Center for Information Technology Policy, and his colleagues used online "crawler" software to collect 2.2 million words of English text to train a machine learning system. The system relies on word embedding, a statistical modeling technique commonly used in machine learning and natural language processing, and it incorporates the Implicit Association Test (IAT) that psychologists use to reveal human prejudices.

At the core of this word embedding is GloVe (Global Vectors for Word Representation), an unsupervised learning algorithm trained on the statistics of word-to-word co-occurrence in a corpus. When processing vocabulary, it mainly looks at how words relate to one another, that is, how frequently different words appear together. As a result, semantic combinations and associations similar to those in human language use emerge among the most closely related words.
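
To make the method concrete, here is a minimal sketch of the word-to-word co-occurrence counting that such an algorithm starts from. The tiny corpus, the sentence-level co-occurrence window and the word choices are illustrative assumptions of this sketch, not the data or code used in the study, which trained GloVe on a far larger crawled corpus.

```python
from collections import Counter
from itertools import combinations

# A made-up toy corpus; the actual study used millions of words of crawled English text.
corpus = [
    "flowers are pleasant and beautiful".split(),
    "music is pleasant to hear".split(),
    "insects are unpleasant and annoying".split(),
]

# Count how often two distinct words appear in the same sentence.
# GloVe-style models turn statistics like these into word vectors, so words
# that co-occur often ("flowers" and "pleasant") end up close together.
cooccurrence = Counter()
for sentence in corpus:
    for w1, w2 in combinations(sorted(set(sentence)), 2):
        cooccurrence[(w1, w2)] += 1

for pair, count in cooccurrence.most_common(5):
    print(pair, count)
```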

The more neutral results are that flowers are associated with women and music with pleasure; the extreme results associate laziness and even criminality with black people, while a hidden "prejudice" links women more closely to the arts, the humanities and the family, and men more closely to mathematics and engineering.
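
Such associations are measured with a score that mirrors the IAT: how much closer a target word sits to one set of attribute words than to another in the embedding space. The sketch below illustrates the idea with made-up three-dimensional vectors; real tests use pretrained embeddings, and the numbers here are not results from the study.

```python
import numpy as np

# Made-up toy vectors for illustration only; real word embeddings have
# hundreds of dimensions and are learned from large corpora.
vectors = {
    "flowers":    np.array([0.9, 0.1, 0.0]),
    "insects":    np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attributes_a, attributes_b):
    # Positive: the word is more strongly associated with set A than with set B.
    return (np.mean([cosine(vectors[word], vectors[a]) for a in attributes_a])
            - np.mean([cosine(vectors[word], vectors[b]) for b in attributes_b]))

print(association("flowers", ["pleasant"], ["unpleasant"]))  # > 0
print(association("insects", ["pleasant"], ["unpleasant"]))  # < 0
```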

In fact, artificial intelligence is not to blame for this; humans are. Since the birth of humankind and throughout its evolution, people have been full of prejudices, and ever since human society took shape it has been filled with negativity and human weaknesses, all of which are embodied in human culture. The carrier of culture is language, so all these prejudices can be found in language and in semantics.

Teaching artificial intelligence to be more objective and fair, or at least more objective and fair than human beings, seems difficult to achieve at present. Because the prejudice and discrimination in human culture are a kind of inborn "original sin", humans can teach artificial intelligence to be more just and objective only after removing their own original sin, or by introducing the principle of social supervision and mutual oversight to teach and supervise machines to be fair and just.

When the artificial intelligence that humans design and develop is not objective, fair and just enough, its applications may have to be limited. For example, if artificial intelligence is used to handle recruitment, unfair outcomes will appear just as they do when humans handle it, or even more so: an applicant whose name is of European-American origin may get more than 50% more interview opportunities than one whose name is of African-American origin, and male applicants may get more interview opportunities than female applicants.

Even though artificial intelligence reporters (writing software) can already write, their prejudices (which arise inevitably from language use and semantic association) mean that robots can only be allowed to write financial statistics stories, not investigative reports, and still less commentary. In particular, they should not write about cases like the Simpson trial; otherwise prejudice and discrimination would show through between the lines.

As long as artificial intelligence naturally acquires the human weaknesses of prejudice and discrimination and cannot overcome them, we cannot expect too much from its prospects.
