Improving the Fairness of Algorithm-Based Decision-Making
May 15, 2020 01:57 | Source: "China Social Sciences", May 15, 2020, No. 1926 | Author: Reporter Wang Youran

  Currently, both the public and private sectors are increasingly using algorithms to generate decisions automatically. This reality has sparked wide discussion: Are algorithm-based decisions fair and transparent? Can delegating decisions to machines avoid the biases of human decision-making?
  
  Algorithmic Decision-Making Has Two Sides
  
  James Manyika, chairman of the McKinsey Global Institute, and senior business analyst Jake Silberg noted that challenges concerning the fairness of computer algorithms are nothing new. In 1988, a London medical school was found to have used a computer program to decide which applicants would receive interviews, and the program discriminated against women and applicants with non-European surnames. Yet the program was more accurate than human decision-makers, and the proportion of non-European students admitted by the school was higher than at most London medical schools. In other words, even the output of this biased algorithm was still fairer than many human decisions.

  With the large-scale digitization of data, algorithms are becoming more complex and their applications more widespread, and algorithm-based decisions now exert an influence on individual and group life and on social and economic activity that cannot be ignored. Many experts welcome algorithmic decision-making, regarding it as an "antidote" to long-standing human bias: in many cases, algorithms can reduce humans' subjective interpretation of data.

  Of course, cases in which automated algorithms amplify existing human biases are not uncommon. Research by Latanya Sweeney, a professor at Harvard University in the United States, found that searching for African-American names on the internet is more likely to return advertisements for arrest records. Similarly, people with typically African-American names are more likely to be targeted as prospective high-interest credit card customers.

  Kartik Hosanagar, a professor at the University of Pennsylvania, said that algorithmic bias causes large-scale negative effects more easily than human bias does. Many people assume that algorithms are rational and never make mistakes, which is why cases have emerged of algorithms being used to sway election results or manipulate markets. Moreover, while one "bad judge" may affect thousands of people, one "bad algorithm" may affect billions.

  Christo Wilson, associate professor of computer science at Northeastern University, said that carefully designed, appropriately audited, and thoroughly reviewed automated decision algorithms may be fairer than humans. People should monitor algorithms over the long term to ensure that they continue to operate correctly even as social conditions change. Without such safeguards, algorithms may exacerbate existing human biases.
  
  Two Major Causes of Algorithmic Bias
  
  Solon Barocas, assistant professor of information science at Cornell University in the United States, noted that using data to solve problems may be unfriendly to particular groups. Nicol Turner Lee, a fellow at the Brookings Institution's Center for Technology Innovation, and Paul Resnick, professor of information science at the University of Michigan, identified two major causes of algorithmic bias: historical human bias, and incomplete or unrepresentative training data.

  Historical human bias stems from unfair views of particular groups, and such bias is easily replicated and amplified by computer models. A similar incident occurred at Amazon in the United States. In a recruiting algorithm the company used, designers "taught" the algorithm to score job applicants by identifying vocabulary in their résumés, using Amazon's engineering department as the reference standard. Because the engineering department was predominantly male, the algorithm lowered the scores of applicants whose résumés contained the word "women's".
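The mechanism described above can be illustrated with a minimal sketch. The data, function names, and scoring rule below are all hypothetical: each word is scored by how much the hire rate among résumés containing it deviates from the overall hire rate, so a word that appears only in rejected résumés inherits a negative score from the historical labels.

```python
from collections import defaultdict

def train_word_scores(resumes, labels):
    """Score each word by how much it raises or lowers the hire rate.

    resumes: list of token lists; labels: 1 = hired, 0 = rejected.
    Score = hire rate among resumes containing the word, minus the
    overall hire rate. A negative score means the word is penalized.
    """
    overall = sum(labels) / len(labels)
    hires = defaultdict(int)
    counts = defaultdict(int)
    for tokens, label in zip(resumes, labels):
        for word in set(tokens):
            counts[word] += 1
            hires[word] += label
    return {w: hires[w] / counts[w] - overall for w in counts}

# Hypothetical training set mirroring a male-dominated department:
# resumes mentioning "women's" were historically rejected.
resumes = [
    ["python", "chess", "club"],        # hired
    ["java", "robotics", "team"],       # hired
    ["python", "women's", "chess"],     # rejected
    ["java", "women's", "robotics"],    # rejected
]
labels = [1, 1, 0, 0]

scores = train_word_scores(resumes, labels)
print(scores["women's"])  # negative: the word is penalized
print(scores["python"])   # zero: appears equally in hired and rejected
```

The model never sees gender as an input; it simply reproduces the pattern baked into the historical labels.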

  Of course, some algorithmic bias is unintentional and may be reinforced and perpetuated without users' knowledge. For example, an African American who has been targeted as a prospective high-interest credit card customer may, upon clicking the corresponding advertisement, unknowingly ensure that they will continue to receive such pushes. This negative feedback loop may make the algorithm increasingly biased.
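The feedback loop described above can be sketched as a toy simulation. The parameters are hypothetical: each click on a targeted ad is assumed to raise the probability the system shows (and the user clicks) the same kind of ad again, so a small initial targeting rate compounds over time.

```python
def simulate_feedback(rounds, initial_rate, lift_per_click=0.05):
    """Toy model of an ad-targeting feedback loop.

    Each round, engagement with the targeted ad makes the system
    target the user more, multiplying the targeting rate by
    (1 + lift_per_click), capped at 1.0.
    """
    rate = initial_rate
    history = [rate]
    for _ in range(rounds):
        rate = min(1.0, rate * (1 + lift_per_click))
        history.append(rate)
    return history

history = simulate_feedback(rounds=20, initial_rate=0.10)
print(round(history[0], 3), "->", round(history[-1], 3))
# the rate grows every round: an initially mild targeting bias compounds
```

Even a modest per-click lift produces a steadily widening gap between users who were initially targeted and those who were not.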

  When certain groups are underrepresented in the data used to train an algorithm, decisions based on that algorithm will disadvantage them. Joy Buolamwini, a computer scientist at the Massachusetts Institute of Technology in the United States, showed in her research that facial recognition software distinguishes gender according to specific facial features, but because training data for women with darker skin was insufficient, the algorithm misjudged the gender of such women. Conversely, training data that is overly concentrated on certain groups can also skew results. A report released by the Georgetown University Law Center found that in the facial recognition databases used by law enforcement agencies, African Americans are overrepresented relative to whites, which means African Americans face a higher probability of being misidentified as criminal suspects.
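The underrepresentation effect can be shown with a deliberately simplified sketch. Everything here is hypothetical: a one-dimensional "facial feature" whose value shifts with skin tone, and a nearest-centroid classifier. Because the training set contains many light-skinned women but only one dark-skinned woman, the learned "woman" centroid sits far from her, and she is misclassified.

```python
def train_centroids(samples):
    """samples: list of (feature, label). Returns mean feature per label."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def nearest_centroid_predict(x, centroids):
    """Assign x to the label whose centroid is closest."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Hypothetical 1-D feature: 10 light-skinned women near 1.0,
# 10 men near -1.0, and a single dark-skinned woman at -0.3.
train = [(1.0, "woman")] * 10 + [(-1.0, "man")] * 10 + [(-0.3, "woman")]
centroids = train_centroids(train)

print(nearest_centroid_predict(1.0, centroids))   # "woman" (correct)
print(nearest_centroid_predict(-0.3, centroids))  # "man" (misclassified)
```

The classifier's overall accuracy on the training distribution is high, which is exactly why the error concentrated in the underrepresented group is easy to miss.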

  Hosanagar told this reporter that in psychology and genetics, human behavior is often attributed to the genes one carries at birth and to environmental influences, that is, the factors of "nature and nurture". Algorithms are no different. The behavior of early computer algorithms was programmed by their human designers, the equivalent of "nature"; modern algorithms learn much of their logic from real-world data, the equivalent of "nurture". As artificial intelligence becomes widely applied, many of the biases and unpredictable behaviors of modern algorithms originate in their training data.
  
  Enhancing Algorithmic Fairness
  
  Wilson said that researchers are currently working to improve the fairness of machine learning algorithms and are exploring many methods for detecting and reducing algorithmic bias. Scholars at Harvard University published a study in 2019 analyzing various measures for enhancing the fairness of machine learning. In addition, people can open the "black box" of a machine learning system to understand what the model is doing and why it decides as it does. Research indicates that fairness can be achieved without lowering accuracy. Human judgment remains an important part of ensuring algorithmic fairness: humans must decide what "fairness" means.
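One common way to detect the kind of bias discussed above is to audit a system's outputs by group. The sketch below is hypothetical and uses one well-known heuristic, the US EEOC "four-fifths rule" for disparate impact: the lowest group selection rate should be at least 80% of the highest. It is only one of many possible fairness definitions, which is exactly why, as Wilson notes, humans must decide what "fairness" means.

```python
def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, chosen in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths_rule(rates):
    """Disparate-impact heuristic: min group rate >= 80% of max group rate."""
    return min(rates.values()) >= 0.8 * max(rates.values())

# Hypothetical audit of 10 credit decisions per group.
decisions = ([("A", True)] * 6 + [("A", False)] * 4
             + [("B", True)] * 3 + [("B", False)] * 7)
rates = selection_rates(decisions)
print(rates)                           # {'A': 0.6, 'B': 0.3}
print(passes_four_fifths_rule(rates))  # False: 0.3 < 0.8 * 0.6
```

An audit like this says nothing about why the disparity exists; it only flags that the outputs warrant scrutiny.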

  Hosanagar said that to enhance algorithmic fairness, we must first increase the transparency of the data and of the logic behind algorithmic decisions, and then formally audit and test algorithms. Oversight of algorithm-based decision-making could be implemented through an "algorithmic bill of rights". Such a bill should include four key points: describing the details of how training data are collected and used; explaining the steps of the algorithm in terms ordinary people can easily understand; giving users some degree of control over the algorithm; and establishing accountability for the unintended consequences that automated decisions may cause.

Editor in charge: Changchang